@mattiasnyc
			Wow... I think you managed to misunderstand pretty much everything in my last post that could possibly have been misunderstood. Sorry for that. Hope I can clear things up.
			No. Nothing I said was related to market share.
I don't think you can have disruption without at least one major-market-share-holding enterprise being a part of 'it', whatever that feature is. If you want to argue that the iPhone was disruptive, then I agree with you. And obviously it had 0% market share at introduction. So ask yourself this: if it, and every derivative product, had never climbed above a 1% market share, would it still have been a disruptive technology? Of course not.
Pointing out that market share has nothing to do with it because every new technology starts at zero percent is just silly. That's OBVIOUSLY not the point I'm making.
			MS has no choice but to introduce something unique, that is highly desirable for many people, and which is easily marketable. That's what I call a killer feature. Without such a killer feature MS will never get off the ground in the mobile space.
If it has nothing to do with market share, then why are the above criteria necessary? Why does it have to be desirable and marketable? Obviously because it needs to sell in order to be disruptive in the market. So we're really just back to arguing about the definition of "killer feature" and whether it includes having a large enough market share to actually disrupt the market.
So, looking at a brief description of "killer", it's pretty much exactly as I interpret the word (adjective, Slang: 6. severe; powerful: a killer cold. 7. very difficult or demanding: a killer chess tournament. 8. highly effective; superior). It has to do with the intrinsic value of the feature, not with whether it's adopted by the market (which is required for it to be disruptive), or whether it is disruptive.
			In regard to the term "killer feature" we'll have to accept that we define it differently. For you it applies to anything you think deserves to be very popular. For me it applies only to disruptive technologies that actually become very popular and change market dynamics.
See how you got that backwards? You say I think it applies to whatever deserves to be popular, but I haven't said that. I merely said that a killer feature is a feature that is great on its own merits. That has nothing to do with whether or not I think it deserves to be popular; the latter doesn't cause the former. You then move on to talk about the technology being disruptive which, as you say yourself, means changing the market - which in turn means the prerequisite for something being disruptive is that it (eventually) gains a significant market share, i.e. popularity. You're the one arguing that popularity matters, not me, in contrast to what you say above.
			If I had hours I could fill multiple pages. Even if I limited myself to only the last 60 years of IT technology I could still fill pages. Obviously all of that pales in comparison to the enormous number of iterative improvements made to existing technologies every single day all over the world. We obviously don't have "more disruption than the status quo".
Careful, though. If you're going to dismiss some technology because it's iterative rather than inventive, you'll have to make sure the iPhone actually invented the tech you think made it unique, which in turn led to it being disruptive. There were touchscreen phones before the iPhone, and there were internet-connected phones before the iPhone.
Your usage of the iPhone as a "disruptive technology" really only works if you view "technology" as "product". There was a market for mobile phones, and one product, the iPhone, disrupted that market. Not because of the individual tech I just mentioned, but because it was an appealing product that people wanted. The fact that they bought the device led to a disruption in the market. If it had failed, it would still have had the same technology.
			Agreed. The vision is great! The problem with it is that, again, neither CShell nor Continuum get us anywhere close to that vision. Both are enabling technologies.
In your own words: What do they enable?
			In the interest of making my objection somewhat more practical, we must only consider that many people will want many of those screens to run software for other OSes (iOS, Android, OSC, etc). In practice, many people will be purchasing multiple CPUs either way. That's where this house of cards comes crumbling down. It can't actually replace the devices you're saying it will. For the same reason it also lacks the ability to save customers money.
			The only people whom this could potentially serve are those who explicitly require multiple Windows devices. That excludes most consumers right off the bat.
Using dumb terminals to serve any OS is the smart thing to do because it gives you the flexibility to move between OSes. Rather than buying an iMac, you get a screen. Then you hook up your iPhone to it, or your Android, or your Win phone. That makes 100% sense to me. If you then want to argue that users are stuck buying several phones, or several tablets, then fine. My prediction is that people will pick the OS that feels best and use it throughout their ecosystem. So I don't agree with your prediction.
			Agreed. I wasn't talking about cabling. I was talking about the fact that you actually need separate peripherals, irrespective of how they are connected. This entire concept only makes sense if the consumer actually wants multiple screens running Windows software. IMHO most consumers don't want that. If the average consumer only wants one device that runs Windows, then there is no point in separating the CPU from everything else. Most consumers will view that as MS just making things more complicated than necessary. Consumers who only want one device that runs Windows, and who don't require the power of a desktop, will prefer a laptop/ultrabook/etc. where all the required peripherals come bundled in one package. That was my point.
I don't agree with that. You would have to explain just why my friends have an iPhone, an iPad and a Mac desktop or laptop. If they only want one form factor for i/o, why did they buy all of those devices? Why didn't they just pick the most powerful one, the one that satisfies the most demanding needs, and use only that?
I think people gravitate towards different i/o devices because they offer superior convenience and functionality in different situations. You can't easily carry your desktop with you. You can carry your laptop, but it's still a bit of a nuisance, and you've downgraded your screen size. You can bring your tablet no problem, but how nice is it to make phone calls with it? How many people do that? Your smartphone is great; it's portable, it's connected, it can do almost everything... and yet when you come home, do you watch movies and sports on it, or do you use a big screen again? Do you do all your Excel and word processing on it, or do you want a 24"+ screen, keyboard and mouse to go with whatever device you're using?
Consumers want an experience that is fluid in the size and type of i/o, yet I guarantee you they'd love it if the experience was also consistent - which is something you seem to agree with.