What do you think about the CShell?

a5cent

New member
Nov 3, 2011
6,622
0
0
Visit site
I probably would, although I seem to be surrounded by people who have a pretty decent comprehension when I tell them technical stuff (and no, they are not from a technical background). So I usually go for a slightly more technical and accurate explanation. In that case I can just say "W10 is an OS that can run two types of applications" and they would be completely fine with it.

But I get your drift, and if you think it's better to put it that way then go ahead.

Any system that can run a virtual machine can run multiple types of apps. A Java application and a Python application are also two different types of apps, but they are a very poor analogy for the differences between Win32 and UWP. Windows 95 could also run two types of apps, Win16 and Win32 applications. That too is a very poor analogy. The "two types of apps" explanation is entirely insufficient to make the point I'm trying to make. IMHO that's the opposite of providing a more technical and accurate explanation. ;-)

UWP and Win32 have two completely different app models (what Win32 provides can barely be called one). UWP and Win32 have two entirely different security models. The UI compositing engines employed by UWP and Win32 are different and abide by different rules. Of course the APIs are also different.

With one exception, everything above is attributed to the OS. APIs are also often attributed to the OS, but don't have to be. Due to those differences, and many more, I think it's far better to consider W10 to be composed of two different OSes rather than one OS that can run two different types of apps. The differences reach a lot further down the software stack than just the application layer! If you want your non-technical friends to also think about it that way, telling them it's one OS that runs two different kinds of apps is exactly the disservice I'm talking about here. That oversimplifies to the point of being wrong.

Hope I could clear up why I'm not a fan of the "one OS with two types of apps" explanation ;-)
 

mattiasnyc

New member
Nov 20, 2016
419
0
0
Visit site
Wow... I think you managed to misunderstand pretty much everything in my last post that could possibly have been misunderstood. Sorry for that. Hope I can clear things up.

No. Nothing I said was related to market share.

I don't think you can have disruption without at least one major-market-share-holding enterprise being a part of 'it', whatever that feature is. If you want to argue that the iPhone was disruptive then I agree with you. And obviously it had 0% market share at introduction. So, ask yourself this: If it, and any derivative product, had continued to a maximum 1% market share, would it still have been a disruptive technology? Of course not.

Pointing out that market share has nothing to do with it because in the beginning any new technology has zero percent is just silly. That's OBVIOUSLY not the point I'm making.

MS has no choice but to introduce something unique, that is highly desirable for many people, and which is easily marketable. That's what I call a killer feature. Without such a killer feature MS will never get off the ground in the mobile space.

If it has nothing to do with market share then why are the above criteria necessary? Why does it have to be desirable and marketable? It's obviously because it needs to sell to be disruptive in the market. So we're really just back to arguing about the definition of "killer feature" and whether or not it includes having a large enough market share to be able to disrupt the market.

So, looking at a brief dictionary entry for "killer", it's pretty much exactly as I interpret the word (adjective, slang: 6. severe; powerful: a killer cold. 7. very difficult or demanding: a killer chess tournament. 8. highly effective; superior). It has to do with the intrinsic value of the feature, not whether or not it's adopted by the market (which is required for it to be disruptive), or whether it is disruptive.

In regard to the term "killer feature" we'll have to accept that we define it differently. For you it applies to anything you think deserves to be very popular. For me it applies only to disruptive technologies that actually become very popular and change market dynamics.

See how you got that backwards? You say I think it applies to whatever deserves to be popular, but I haven't said that. I merely said that a killer feature is a feature that is great on its own merits. That has nothing to do with whether or not I think it deserves to be popular; the latter doesn't cause the former. You then move on to talk about the technology being disruptive, which, as you say yourself, means changing the market, which in turn means the prerequisite for something being disruptive is it (eventually) gaining a significant market share, i.e. popularity. You're the one arguing that popularity matters, not me, in contrast to what you say above.

If I had hours I could fill multiple pages. Even if I limited myself to only the last 60 years of IT technology I could still fill pages. Obviously all of that pales in comparison to the enormous number of iterative improvements made to existing technologies every single day all over the world. We obviously don't have "more disruption than the status quo".

Careful though. If you're going to dismiss some technology because it's iterative rather than inventive you'll have to make sure the iPhone actually invented the tech you think made it unique which in turn led to it being disruptive. There were touch screen phones before the iPhone, and there were internet connected phones before the iPhone.

Your usage of the iPhone as a "disruptive technology" really only works if you view "technology" as "product". There was a market for mobile phones, and one product, the iPhone, disrupted that market. Not because of the individual tech I just mentioned, but because it was an appealing product that people wanted. The fact that they bought the device led to a disruption in the market. If it had failed it would still have had the same technology.

Agreed. The vision is great! The problem is that, again, neither CShell nor Continuum gets us anywhere close to that vision. Both are enabling technologies.

In your own words: What do they enable?

In the interest of making my objection somewhat more practical, we must only consider that many people will want many of those screens to run software for other OSes (iOS, Android, OS X, etc.). In practice, many people will be purchasing multiple CPUs either way. That's where this house of cards comes crumbling down. It can't actually replace the devices you're saying it will. For the same reason it also lacks the ability to save customers money.

The only people whom this could potentially serve are those who explicitly require multiple Windows devices. That excludes most consumers right off the bat.

Using dumb terminals to serve any OS is the smart thing to do because it gives you the flexibility to move between OSes. Rather than buying an iMac you get a screen. Then you hook up your iPhone to it, or your Android, or your Win phone. That to me makes 100% sense. If you then want to argue that users are stuck buying several phones, or several tablets, then fine. My prediction is that people will pick the OS that feels the best and use it throughout their ecosystem. So I don't agree with your prediction.

Agreed. I wasn't talking about cabling. I was talking about the fact that you actually need separate peripherals, irrespective of how they are connected. This entire concept only makes sense if the consumer actually wants multiple screens running Windows software. IMHO most consumers don't want that. If the average consumer only wants one device that runs Windows, then there is no point in separating the CPU from everything else. Most consumers will view that as MS just making things more complicated than necessary. Consumers who only want one device that runs Windows, and who don't require the power of a desktop, will prefer a laptop/ultrabook/etc where all the required peripherals come bundled in one package. That was my point.

I don't agree with that. You would have to explain just why my friends have an iPhone, an iPad and a Mac desktop or laptop. If they only want one form factor for i/o, why did they buy all of those devices? Why didn't they just pick the most powerful one, the one that satisfies the most demanding needs, and use only that?

I think people gravitate towards different i/o devices because they offer superior convenience and functionality in different situations. You can't carry your desktop with you easily. You can carry your laptop with you, but it's still a bit of a nuisance, and you've then downgraded your screen size. You can bring your tablet no problem, but how nice is it to make phone calls with it? How many people do that? Your smartphone is great; it's portable, it's connected, it can do almost everything... and yet when you come home, do you watch movies and sports on it or do you use a big screen again? Do you do all your Excel and word processing on it, or do you want a 24"+ screen, keyboard and mouse to go with whatever device you're using?

Consumers want an experience that is fluid in size and type of i/o, yet I guarantee you they'd love it if the experience were also consistent, which is what you seem to agree on.
 

a5cent
If you want to argue that the iPhone was disruptive then I agree with you. And obviously it had 0% market share at introduction. So, ask yourself this: If it, and any derivative product, had continued to a maximum 1% market share, would it still have been a disruptive technology? Of course not.

This is the core of our disagreement and I'm not sure why you don't yet see it. For you a product or technology can only receive the label "disruptive" after it has been introduced to market and achieved widespread commercial success (or something like that). I don't see it that way. If the realistic intent of a new product or technology is to fundamentally change the dynamics of an existing market or create a new one, then I'd call it disruptive, even if it has 0% market share. That was the point I was trying to make. If it helps us come to an agreement and prevents us from going around in circles, then we can also just call it "potentially disruptive". That's certainly the more precise expression.

If you still have no better response than to cry "silliness", then we're likely stuck.

IMHO CShell and Continuum are not even potentially disruptive, because they can't fundamentally change anything about the market by themselves. IMHO there is nothing you could say that would convince a notable number of consumers that CShell and Continuum are things they need! End of story.

In your own words: What do they [CShell + Continuum] enable?

Continuum = an API that helps software re-composite its UI in a somewhat standardized way, with the goal of making it easier to adapt to different display sizes (this is not even a user-facing technology; it's for developers).
CShell = a Continuum-enabled launcher for Windows
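As a rough sketch of the "adapt to display size" idea behind such a re-compositing API, consider the following. Everything here is invented for illustration: the function name, the breakpoint values and the layout classes are hypothetical, not part of any actual Continuum API.

```python
# Hypothetical sketch of the idea behind a UI re-compositing API like
# Continuum: the app describes its content once, and a layout policy
# maps the current display to a composition. All names and breakpoints
# below are invented for illustration.

def choose_layout(width_px: int) -> str:
    """Map an effective display width to a layout class."""
    if width_px < 720:
        return "phone"    # single column, touch-first
    if width_px < 1366:
        return "tablet"   # two panes, hybrid input
    return "desktop"      # multi-window, pointer-first
```

Plugging a phone into a large monitor would then flip the same app from the "phone" composition to the "desktop" one, without the developer handling every display ad hoc.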

As I said already, some other killer application that is of more utility to the average Joe may be built on top of those enabling technologies, but by themselves they will achieve little.

Careful though. If you're going to dismiss some technology because it's iterative rather than inventive you'll have to make sure the iPhone actually invented the tech you think made it unique which in turn led to it being disruptive.

I would say that combining existing technology in new ways counts as inventive, particularly if the new combination has the potential to shake up the existing order. That's what the iPhone did. I really couldn't care less if a (potentially) disruptive product or technology was invented from scratch or assembled from 50 year old parts.

If it has nothing to do with market share then why are the above criteria necessary? Why does it have to be desirable and marketable? It's obviously because it needs to sell to be disruptive in the market.

This again points to the core of our argument. I'd say it's the exact other way around. IMHO whatever MS provides must be (potentially) disruptive to have any chance of taking market share at all. Why? Because the leaders are now so entrenched that MS could slightly improve on everything their competitors do and still get absolutely nowhere. MS has no choice but to make inroads by doing something different!

The terms "unique" and "desirable" refer to the potential to be disruptive, i.e. doing something in a novel and useful way. Only if those requirements are met can we turn towards the next step, which is successfully commercializing it. That's where the term "marketable" comes in.

See how you got that backwards? You say I think it applies to what deserves to be popular, but I haven't said that. I merely said that a killer feature is a feature that is great on its own merits.

Trying to get a smugness award? Anyway, answer me this: Who decides whether or not a feature is great on its own merits? You?

I don't agree with that. You would have to explain just why my friends have an iPhone, an iPad and a Mac desktop or laptop. If they only want one form factor for i/o, why did they buy all of those devices?

Nowhere did I say people want only one form-factor of device. What I said is that people only want one form-factor of device to run Win32 Windows software. Completely different thing.

I know almost nobody that doesn't have at least three devices. I also know almost nobody that has more than one device to run Win32 software. That's where Continuum crumbles. It only makes sense for people who want to run Win32 software on more than one display. That's a very small and shrinking market. If you disagree with that we'll have to agree to disagree.
 

PerfectReign

New member
Aug 25, 2016
859
0
0
Visit site
Interesting comments here. I'm wondering what, if anything, will come out of the latest Intel spat regarding Windows on ARM.

I still see CShell as an extension of OneCore and Nadella's "mobile first, cloud first" vision. In this instance apps run anywhere and the data live in the cloud.
 

nate0

New member
Mar 1, 2015
3,607
0
0
Visit site
Interesting comments here. I'm wondering what, if anything, will come out of the latest Intel spat regarding Windows on ARM.

I still see CShell as an extension of OneCore and Nadella's "mobile first, cloud first" vision. In this instance apps run anywhere and the data live in the cloud.

Probably not much of anything but noise, if you ask me.
 

BackToTheFuture

New member
Aug 9, 2013
44
0
0
Visit site
LOL, people got so worked up over this.

Composable shell, in a nutshell, is another major stepping stone in unifying the Windows code base, but not the end-all-be-all revolutionary magic. From now on, it takes fewer resources to develop Windows for all platforms. Each platform still has a separate Windows version, sharing a big chunk of code, including the kernel, the windowing server (which is CShell) and the UWP subsystem (obviously I'm oversimplifying here; the OS architecture is much more complex). Any change to this common shared code will trickle down to every version. It also doesn't change how UWP apps work; the apps work the same way as when they debuted two years ago. Nor does it mean the Win32 subsystem will be available on every platform, since this legacy subsystem is not part of the intended shared code (I believe). The Win32 API has not evolved since it was finalized 20 years ago; anyone who works directly with the Win32 API knows the pain. Microsoft has been actively deviating from Win32 since 2000, with the introduction of .NET.

This advanced windowing server will much better support transforming devices (for example, a detachable, modular, multi-screen tablet) by appropriately rearranging the GUI layout dynamically, but most people will still only want a traditional slab, as cost is the deciding factor. It doesn't help Windows mobile attract customers any more than it does now.

So no, CShell will not make Windows mobile any more attractive, and Microsoft may never integrate Win32 into Windows Mobile, although I can see a lot of road warriors wanting such devices; I for one want one too. What makes W10M attractive to me is its ease of use and consistency, plus the beautiful and handy live tiles, and a lot of other subtle things, for example inline controls in notifications, the Bing lock screen, etc. Continuum will be handy at times, but it's not the must-have feature.

In order to make Windows on mobile successful, MS needs consumers to buy mobile devices running Windows, including phones. So far, Windows phones have not caught on with consumers, for many reasons. Hence MS now targets the enterprise sector, where its strength in enterprise services may be enough to entice corporate users. These users, after becoming familiar with Windows on mobile and its strengths, may like it, much like us here, and recommend WM to their families and friends. And as the user base grows, apps will come, although I doubt that small, regular consumer-oriented apps (payment, shopping, etc.) will matter at all in a couple of years; they will become plug-in-like services to the OS(es). Sophisticated applications, rewritten in UWP, will be able to run on mobile devices as well, in time. This line of thought has been brought up many times; I'm just repeating it here.

Have a good day!
 

mikosoft

New member
Jun 15, 2013
64
0
0
Visit site
The differences reach a lot further down the software stack than just the application layer!

Yes and no. The base of the OS is the kernel, of which there is only one. The kernel runs everything else, including system processes like the shell, HAL, API libraries, drivers, user processes, etc. (okay, that is not completely precise, but it's a good enough abstraction for the point I'm trying to make). In this sense there are no two OSes. Both Win32 and UWP nicely converge on the kernel. Their API libraries both run on the same kernel using the same drivers. Yes, the compositing engines are different, but the same applies: a different API library running on the same kernel. A different security model? I don't know exactly how that one works, so there may be some truth in it, but I suppose it is again just a library.

I would argue that if Java were part of Windows it would still be one OS, with one more subsystem. If it had GTK+ or Qt libraries for the GUI I would still argue it's one OS (albeit one with a split personality). It includes the Linux subsystem and it still isn't two OSes; the whole Linux subsystem runs on the same kernel. I don't really care what format the applications are, what APIs they use and what libraries they call; as long as they run on the same kernel, it's still the same OS. Even the horrible mess of Windows 9x should be considered a single OS, not a Windows/DOS hybrid, because although MS-DOS does start Windows, after that the Windows kernel takes over and runs DOS (mostly) as a subsystem (in emulation).
 

Drael646464

New member
Apr 2, 2017
2,219
0
0
Visit site
This is the core of our disagreement and I'm not sure why you don't yet see it. For you a product or technology can only receive the label "disruptive" after it has been introduced to market and achieved widespread commercial success (or something like that). I don't see it that way. If the realistic intent of a new product or technology is to fundamentally change the dynamics of an existing market or create a new one, then I'd call it disruptive, even if it has 0% market share. That was the point I was trying to make. If it helps us come to an agreement and prevents us from going around in circles, then we can also just call it "potentially disruptive". That's certainly the more precise expression.

Wait what?

The iPhone, if it hadn't been marketed and hadn't followed the iPod, could have been a total flop. Something is only disruptive in the past tense. You can't call it disruptive in the future tense, because disruption is only visible in hindsight.

It goes the other way too: VR was introduced in the 90s. People a decade later would have called it a flop. Now that the technology is catching up, it most certainly will be disruptive.

You don't really know whether it is or not until it has actually happened. Kind of like how the absence of evidence is not evidence.

It could very well be that device convergence is extremely disruptive... one day. It is to some degree in tablets, and it might also be somewhat disruptive in every category of device. Or not. Nobody knows till the fat lady sings.

I'm not fond of the word myself. It's buzzwordy. I wouldn't compare, for example, the iPhone with the wheel or the lightbulb in terms of its actual usefulness and the new opportunities it offered. Yes, it's innovative, and it does have some great uses, but mainly it's used for narcissism and distraction. VR is probably a lot more profound in terms of its potential human good (therapy, for example, or surgery and design). The iPhone was primarily disruptive in the commercial sense. And it was built using entirely pre-invented technologies. Those other things offer significant technological advantages and could be considered 'innovation' at the level of human history, in terms of someone actually making something new.

CShell, at least, and a single OS across hardware platforms, is actually new. The iPhone is just a hodgepodge of stuff that was lying around. A hybrid OS might allow people to create/do more stuff. An iPhone doesn't really. It's not radically different from things that were pre-existing. People had that meme of a fax, a calculator, a radio/TV, a Walkman, etc.; that's pretty much what a smartphone is: a bundle of other things we could already do, made small for the pocket. For most people it's a Swiss Army knife of brain bubblegum, an adult pacifier that also does all those things you've forgotten how to do, like remind you of things, or calculate, or spell (as compared to a normal mobile phone, since it can obviously communicate with actual people you know too).

I'd argue that the main impact of the smartphone has in fact been mostly negative. Indeed the main impact of the internet might even be, on balance, negative.

Perhaps the word "disruptive" fits in the negative sense: disruptive to the biologically programmed human condition.
 

Drael646464
I haven't heard anything about CShell.

It's "composable shell", a single adaptive shell/UI across all Windows devices. So your mobile phone is identical to your desktop when plugged into a monitor, you can change between an entertainment interface and a productivity interface on the Windows desktop and on gaming consoles, possibly something in between for tablets, and a separate shell for the upcoming dual-screen Andromeda device.

It, in addition to UWP and OneCore, is part of the whole "single Windows operating system" philosophy Microsoft is working towards long term.

So you have:

OneCore (the same across all Microsoft devices)

The platform-specific code/software stack

Composable shell (CShell)


This way, all Windows versions share everything but the middle layer, which I suppose they may start to unify once the other layers are unified, eventually leading to a single "Windows" that runs on everything and just behaves differently depending on what's needed.
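The three-layer picture above can be sketched as a toy model: a shared OneCore at the bottom, a device-specific middle layer, and the composable shell on top. The middle-layer labels here are illustrative guesses, not an inventory of actual Windows components.

```python
# Toy model of the layering described above. Only the middle layer
# differs per device class; OneCore and CShell are shared. The labels
# are illustrative, not actual Windows component names.

def build_stack(device: str) -> list[str]:
    """Return the software stack for a device class, bottom to top."""
    middle = {
        "desktop": "Win32 + UWP platform",
        "phone": "UWP platform",
        "xbox": "UWP platform + game services",
    }[device]
    return ["OneCore", middle, "CShell"]
```

Unifying the middle layer would then collapse the per-device dictionary into a single entry, which is essentially the "one Windows" end state described above.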
 

a5cent
Wait what?

The iPhone, if it hadn't been marketed and hadn't followed the iPod, could have been a total flop. Something is only disruptive in the past tense. You can't call it disruptive in the future tense, because disruption is only visible in hindsight.

<snipped>

You don't really know whether it is or not, until its actually happened. Kind of like the absence of evidence is not evidence.

Who knows? Maybe it comes down to the fact that English isn't my native language and nobody in my area would agree with your definition? It took long enough to figure out where a large part of the disagreement came from. I'd rather focus on the substance than the terminology, so I already decided to go with "potentially disruptive" instead. That hopefully already solved the problem, so let's move on please.
 

a5cent
Yes and no. The base of the OS is the kernel, of which there is only one. The kernel runs everything else, including system processes like the shell, HAL, API libraries, drivers, user processes, etc. (okay, that is not completely precise, but it's a good enough abstraction for the point I'm trying to make). In this sense there are no two OSes. Both Win32 and UWP nicely converge on the kernel.

I disagree with your definition of what constitutes the kernel, but these days it's hard to find even two people from Microsoft who agree on that topic, so arguing over it seems pointless ;-) I prefer a more technical definition:
  • If it runs in the CPU's kernel-mode then it belongs to the kernel (parts of MinWin + Driver Infrastructure).
  • If it doesn't run in the CPU's kernel-mode, then it's not part of the kernel (everything else).
I don't want to argue that point though. I'm just providing context for my own usage of the term.
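The two-bullet rule above can be restated as a simple predicate. The component names in the sets below are illustrative examples only, not an inventory of Windows.

```python
# The kernel-mode rule as a predicate: a component belongs to the
# kernel iff it runs in the CPU's kernel mode. The example component
# names are illustrative, not an inventory of Windows.

KERNEL_MODE = {"MinWin scheduler", "memory manager", "storage driver"}

def belongs_to_kernel(component: str) -> bool:
    """Part of the kernel iff it runs in the CPU's kernel mode."""
    return component in KERNEL_MODE
```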

That aside, wouldn't "OneCore" be the better term for what you are describing? OneCore is that part of Windows which is shared between Win32 and UWP which seems to be what you are going for, so I'm going to use "OneCore" to refer to that.

Both Win32 and UWP nicely converge on the kernel. Their API libraries both run on the same kernel using the same drivers. Yes, the compositing engines are different, but the same applies: a different API library running on the same kernel.

Win32 and UWP both converge on OneCore. True. However, OneCore by itself is not a complete personal-computing OS!

For example, assume an OS-independent UI library like Qt wanted to support the development of UWP apps. While Qt has a completely different UI API from the one exposed by UWP, drawing anything on screen requires Qt to access UWP's rendering pipeline. That's not just an API, but a big chunk of functionality that defines how and when descriptions of graphical content are rendered. None of that is part of OneCore! It's specifically part of the Universal Windows Platform, without which you'd never get anything from a UWP app but a black screen.

In regard to the security model, consider that OSes often enforce certain security restrictions. For UWP, an example would be the restriction that apps are sandboxed, one aspect of which is that apps can't write into each other's private storage space. Enforcing this rule has absolutely nothing to do with the UWP API. That aspect of the security model is not even accessible or configurable through any API. This is also not part of OneCore! Enforcing it is part of the Universal Windows Platform run-time environment! Defining security policies and enforcing them are responsibilities typical of an OS.
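The point about the sandbox can be sketched in a few lines: enforcement lives in the runtime environment, not in any API the app calls. The app IDs and storage layout below are invented purely for illustration.

```python
# Sketch of runtime-enforced sandboxing: each app may write only
# inside its own private storage, regardless of which API it calls.
# App IDs and the storage layout are invented for illustration.

def may_write(app_id: str, path: str) -> bool:
    """Allow writes only inside the app's own private storage."""
    return path.startswith(f"/appdata/{app_id}/")
```

Notice that the rule is applied to every write attempt by the runtime; the app has no API through which to inspect or relax it, which is the sense in which this is OS-level policy rather than API surface.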

The point here is that there exists plenty of functionality, of the type usually considered the responsibility of the OS, which resides between the UWP API and the shared OneCore. That's the part MS refers to as the platform (in UWP). I could provide many more examples, but I think this is enough to illustrate the point.

BTW, the opposite is also true. There are about 4GB worth of DLLs that belong only to Win32, very few of which are purely API implementations. Of course I'd only consider a small part of those 4GB strictly OS-related, but DLLs like advapi32.dll or user32.dll are perfect examples of libraries that implement OS-level features without which no Win32 program will run, yet are part of neither OneCore nor UWP.

You also refer to the shell as being part of the kernel, but that is incorrect. The current shell is implemented in shell32.dll and is specific to Win32. Neither the kernel, nor OneCore, nor UWP knows anything about it.

In a nutshell, OneCore is not a complete personal-computing OS, as it lacks much more than just an API through which to access it. For OneCore to provide all the capabilities typically expected of an OS, it must be complemented with either Win32, UWP, or both. However, UWP and Win32 complement OneCore in very different ways, thereby resembling two OSes. We could get nitpicky and say it's more like 1.5 OSes rather than 2: 0.5 (OneCore) + 0.5 (Win32) + 0.5 (UWP), but that's probably going to end up being more confusing than helpful.

I agree with all your other points though.

If I still haven't convinced you then I'm probably going to be forced to give up. ;-)
 

Drael646464
Who knows? Maybe it comes down to the fact that English isn't my native language and nobody in my area would agree with your definition? It took long enough to figure out where a large part of the disagreement came from. I'd rather focus on the substance than the terminology, so I already decided to go with "potentially disruptive" instead. That hopefully already solved the problem, so let's move on please.

Well, in either case, "potentially disruptive" is not possible to know for sure either. CShell and device convergence collectively could be disruptive commercially; they just aren't yet.
 

mikosoft
If it runs in the CPU's kernel-mode then it belongs to the kernel (parts of MinWin + Driver Infrastructure).
That aside, wouldn't OneCore be the better term for what you are describing?

Well, no, I actually did mean what you suggested. I do have some basic education in operating systems :)

You also referee to the shell as being part of the kernel, but that is incorrect.

Well, not really; I actually meant the exact opposite. My wording was probably bad, sorry for that; English is not my native language. What I meant is that the shell (and pretty much everything else apart from the kernel) is run by the kernel (since one of the kernel's jobs is process scheduling). Sorry for the confusion; I think your suggestion that I meant OneCore also stemmed from this.

We could get nitpicky and say it's more like 1.5 OSes rather than 2 : 0.5 (OneCore) + 0.5 (Win32) + 0.5 (UWP), but that's probably going to end up being more confusing than helpful.

I agree with all your other points though.

I will agree with you here. I think we both view it pretty similarly, but we're coming from different angles (I consider a basic kernel with a process scheduler and memory manager a complete OS, albeit one that is really hard to use :D ), so in the end I don't think one of us is necessarily right and the other wrong.
 

a5cent
I think we both view it pretty similarly, but we're coming from different angles (I consider a basic kernel with a process scheduler and memory manager a complete OS, albeit one that is really hard to use :D ), so in the end I don't think one of us is necessarily right and the other wrong.
Okay, got it. That makes it clear where our differences come from. You have a very technical and "bare-bones" view of what constitutes an OS. Nothing wrong with that.

I'm still wondering, though: wouldn't you say that my view of what constitutes an OS is a lot closer to how the average consumer views and understands it? After all, most have no idea what a memory manager or scheduler is or does. Would you not agree that my view is more useful to most people in terms of helping them understand how OneCore, UWP and Win32 fit together, and what the differences are between W10 and W10M? That's what I was going for with the "W10 = two OSes" point.

Don't worry, I'm not going to argue this anymore. I'm just wondering if I need to change my explanation, or if it is close enough to technical reality while still being understandable to most people, and useful in terms of the point that needs to be made (W10M, not W10, is MS's only personal-computing OS that really is just a single OS).
 
