W10 Mobile x86 .exe Emulator

Drael646464

New member
Apr 2, 2017
Actually, I'm not fully sure I'd want a combined phone/tablet either.

The trouble is you get more power with more size. 1. Tablet - laptop. 2. Phone - media device. 3. Console - desktop - barebones. 4. Smartwatch - smart home hub. Those sit in a similar ballpark performance-wise. The moment you bridge any of those rough groupings, you're giving up performance, storage or battery.

And you're then carrying around something just as bulky if you have a dumb tablet screen and a dumb laptop shell.

Given that phones are getting overpowered for their screen size/usage, perhaps a smartwatch that also operates as a phone (with a dumb phone screen) might make more sense to hybridize?

I get that most people have device redundancy, and it totally makes sense to reduce that, but given how personalized the various combinations of usage style and form factor are getting, I'm not sure "all in one" will suit everyone.

It probably makes more sense to create a bunch of different between-device hybrids, so people can have a few fewer devices, rather than just one.

Like:
- Smartwatch/smartphone/smart home hub/media device/desktop
- Smartphone/tablet (folding?)/media device/desktop
- "Gaming" tablet/laptop/barebones/desktop/media device

Then your fully stationary console/desktop could operate as a media server, they could all have docks with external GPUs, and you just pick whichever of those performance-versus-portability levels suits you.
 

mattiasnyc

New member
Nov 20, 2016
That's partly because we are reaching nm limits, and Moore's law is broken.

More cores is better for some applications, but currently not for most, even if the software is coded for it.

You'll basically never see the processing fully evenly distributed.

I work with content creation so I actually do see it regularly. Gamers will also see this increasingly moving forward.

I guess if your point is that I won't see this on a small phone-sized mobile device because I won't be running desktop apps on it, then I suppose that's probably true. But I also suppose that the argument for Windows-on-ARM is exactly that: that we'd begin to run more and more fully featured software on these ARM devices.

Say you have 16 cores running at 100 MHz and two cores running at 800 MHz - the latter is going to be faster outside of software that specifically needs/is designed to run parallel, like neural networking.
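To put rough numbers on that, here's a minimal Amdahl's-law sketch (illustrative figures only, not any real chip):

```python
# Back-of-envelope Amdahl's law: effective throughput ~ clock x speedup,
# where the speedup depends on the fraction of work that parallelizes.

def amdahl_speedup(p, n):
    """Speedup on n cores for a workload whose parallel fraction is p."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.0, 0.5, 0.9, 0.99):
    many_slow = 100 * amdahl_speedup(p, 16)  # 16 cores @ 100 MHz
    few_fast = 800 * amdahl_speedup(p, 2)    # 2 cores @ 800 MHz
    print(f"parallel fraction {p:.2f}: 16x100MHz ~ {many_slow:5.0f}, 2x800MHz ~ {few_fast:5.0f}")
```

Even at a 99% parallel fraction the two fast cores stay ahead here, since both configurations total 1600 MHz; only perfectly parallel work ties them.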

Massively parallel is something that will happen in the future, but it's better suited to certain applications: graphics, machine learning. The difference is perhaps something like the difference between logic and creativity/intuition in the human brain.

Some processes really benefit from parallel processing, others not so much.

Right, but even though some won't benefit from using 8 cores, if they use for example 6 cores that leaves 2 more for other things. So in a larger perspective it again makes complete sense and increases "power" of computing. If, for example, I have my mobile running ARM and I come home and dock it, then I want the device to run as much as possible. So if I'm streaming TV/Film and I'm also checking mail on a second screen and/or Skyping, that's several processes. In these cases more cores is better if everything is coded properly, because even if streaming uses only one core, having more than one allows other processes to use the other cores and not having to wait.

I also just checked Intel's i7 generation: their 7700 runs 4 cores at 3.6 GHz at 65 W, whereas the 8-core 6900K at 3.2 GHz is at 140 W. I.e., no energy saving there despite using more cores, rather the opposite, and at the same 14 nm.
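A crude perf-per-watt reading of those spec-sheet numbers (GHz x cores per watt, ignoring IPC, turbo and idle behaviour) backs that up:

```python
# Crude throughput-per-watt proxy from the quoted TDP figures.
i7_7700 = 4 * 3.6 / 65     # ~0.22 GHz-cores per watt
i7_6900k = 8 * 3.2 / 140   # ~0.18 GHz-cores per watt
print(f"i7-7700: {i7_7700:.2f} GHz-cores/W, i7-6900K: {i7_6900k:.2f} GHz-cores/W")
```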
 

Drael646464

New member
Apr 2, 2017
Right, but even though some won't benefit from using 8 cores, if they use for example 6 cores that leaves 2 more for other things. So in a larger perspective it again makes complete sense and increases "power" of computing. If, for example, I have my mobile running ARM and I come home and dock it, then I want the device to run as much as possible. So if I'm streaming TV/Film and I'm also checking mail on a second screen and/or Skyping, that's several processes. In these cases more cores is better if everything is coded properly, because even if streaming uses only one core, having more than one allows other processes to use the other cores and not having to wait.

I also just checked Intel's i7 generation: their 7700 runs 4 cores at 3.6 GHz at 65 W, whereas the 8-core 6900K at 3.2 GHz is at 140 W. I.e., no energy saving there despite using more cores, rather the opposite, and at the same 14 nm.

Oh yeah, more cores is good. And it's good for multitasking. My only point was really that for most applications more GHz is better. That it's not a 1-to-1 relationship of cores = power.

For some, as you rightly point out, more cores is better. We will never be able to create truly cognitive machines without FAR more parallel processing than a mere 16 cores. But the more your parallel-focused application makes use of individual cores, the less powerful they actually have to be.

And as we reach the end of nm process limits, maybe more cores is all we'll be able to do, unless we come up with something new. In view of that, perhaps I should cease making such overly subtle points :p

That's cool you actually get to see even core distribution. I've not seen it happen once yet. It's always a tad less on the other cores. Getting better over time though, I suppose. But I am kinda glad someone sees even distribution. It always seems like such a waste.

For mobiles/tablets the biggest issue is heat, not battery life. The more physically distributed the CPU is, the easier it is to design for heat. Heat is a major constraint of smaller form factors, especially given lithium batteries' additional heat.

I'd guess for tablets liquid cooling will become the norm. Maybe they can eventually scale that to phones too IDK.

I guess if your point is that I won't see this on a small phone-sized mobile device because I won't be running desktop apps on it, then I suppose that's probably true. But I also suppose that the argument for Windows-on-ARM is exactly that: that we'd begin to run more and more fully featured software on these ARM devices.

Not sure exactly what you're saying here.

Sidenote: Windows on ARM is first and foremost for tablets, notebooks and servers. Windows on ARM as a smartphone platform is just a mythical thing that fans have dreamed up. It probably will happen one day, just like the folding screen. Device convergence is definitely a thing, but it's not going to reduce everyone to one device. For myself, I'd rather replace a phone with a watch with good voice control, or better yet a pair of glasses with voice control and AR. The smartphone is practically awkward: carrying this weird glass square around, then squinting and pinching at it.

Actual reply:

I guess if you were saying that, via the UWP platform, people will start to design applications that let the high development funding of desktop trickle down to the poorly funded mobile platform - yeah, in the long term you are probably right.

Smartphone users would never fund something like Adobe Photoshop with their demand for free and "price of a coffee" app purchases. But if the platform is shared across mobile, tablet, desktop, AR/VR and console, I can see developers writing power apps and AAA games that work on all of them. The increased funding from larger/higher-margin purchases will make things possible on a smartphone that everyday users probably don't want to do, and even if they did, would rarely pay for.
 

Rosebank

New member
Oct 6, 2016
It seems that with the SD835 they are using the tried and tested big.LITTLE approach to core utilization: the faster cores are adopted for heavy processing, while the lower cluster of cores is used for background tasks and such like. No matter which way you look at it, all 8 cores get used and all contribute to the stable functioning of the device. More demand from the system means more load on the cores; this is pretty common sense, and it helps with power consumption. There's no point having all 8 cores screaming at full throttle to open up an email, but to run Paint Shop or the "WOW" (Windows-on-Windows) x86 emulation, we can be certain the cores will be working much harder, and collectively.
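As an aside, on Linux-based devices (Android included) you can usually see the big/little split yourself by reading each core's rated maximum frequency. A small sketch, assuming the standard cpufreq sysfs layout:

```python
# List each core's max frequency; big.LITTLE chips show two clusters.
# Assumes the standard Linux cpufreq sysfs layout (present on Android too).
import glob

for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/cpuinfo_max_freq")):
    cpu = path.split("/")[5]  # e.g. "cpu0"
    with open(path) as f:
        khz = int(f.read().strip())
    print(f"{cpu}: {khz / 1e6:.2f} GHz")
```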
 

mattiasnyc

New member
Nov 20, 2016
The more physically distributed the CPU is, the easier it is to design for heat.

Oh, I see what you're saying. You're saying that it's the distribution of heat physically that's the issue, not the absolute total temperature necessarily? I'd agree with that.

I'd guess for tablets liquid cooling will become the norm. Maybe they can eventually scale that to phones too IDK.

Well, this is where my comment about a Continuum-like feature off of a multi-core "smartphone" would be useful, because you'd be cooling your one device only. The tablet wouldn't need this cooling because the processing is done on your phone. Dumb tablet just routes input/output signals essentially (keyboard/mouse/phone/touch etc = in, screen/audio = out).

The Lumia 950/950XL are already water cooled if I understand it correctly.

Not sure exactly what you're saying here.

I wrote that poorly. My point was this: you seem to be saying that on current multi-core smartphones the workload isn't distributed well between cores. I'm saying that Win on ARM will enable us to actually run desktop apps on a smartphone, and that some desktop apps will hopefully continue to use as many cores as possible, well distributed, on these phones as well. So if the sales pitch is that we can run Windows on ARM, then the fact that current mobile-programmed apps don't use all cores is irrelevant. What's relevant is what the upcoming apps/usage does to workload distribution.

So the very thing that speaks for Windows on ARM is the very thing that will push users towards usage that benefits from better usage of multiple cores. That's what I mean.

Sidenote: Windows on ARM is first and foremost for tablets, notebooks and servers. Windows on ARM as a smartphone platform is just a mythical thing that fans have dreamed up. It probably will happen one day, just like the folding screen. Device convergence is definitely a thing, but it's not going to reduce everyone to one device. For myself, I'd rather replace a phone with a watch with good voice control, or better yet a pair of glasses with voice control and AR. The smartphone is practically awkward: carrying this weird glass square around, then squinting and pinching at it.

"as a smartphone" should be put into context with what MS has said about that whole device-type. I think it's been said several times that upcoming devices would not fall into existing categories. It's sort of like saying that your watch with voice control is your new smartphone. Is it a "smartphone" or a watch? Who cares, it is what it is.

I actually think it makes some sense, what you're saying. The whole idea that I support is that there's this one device that is mobile/wearable, and it contains the radios for WiFi/cellular/Bluetooth communication etc., and then input/output devices vary. I can see a watch doing that and I can see a smartphone doing that. Just different preferences.

The thing that speaks in favor of a genre-defying smartphone is that size is still an issue, and integration with other devices is still an issue, and so is the user experience. Many people still prefer to hold a device up to their ears to speak to someone (like a phone), and many people still prefer a fairly large screen to browse mail. The transition to a smartphone running Windows on ARM that then uses Continuum, even wirelessly, is more likely to happen sooner (in my opinion) than a watch/eyewear AR combo. Not that we won't get there, but I actually think the path might be through the evolution of the phone rather than the others... although there is some precedent I guess with smart watches.

I guess if you were saying that, via the UWP platform, people will start to design applications that let the high development funding of desktop trickle down to the poorly funded mobile platform - yeah, in the long term you are probably right.

Smartphone users would never fund something like Adobe Photoshop with their demand for free and "price of a coffee" app purchases. But if the platform is shared across mobile, tablet, desktop, AR/VR and console, I can see developers writing power apps and AAA games that work on all of them. The increased funding from larger/higher-margin purchases will make things possible on a smartphone that everyday users probably don't want to do, and even if they did, would rarely pay for.

I think you're missing the point here. Just consider the following two:

1: I can connect my current Lumia 950 to a computer keyboard, mouse and a monitor. It acts as a computer when I do this, but also with cellular connectivity.

2: With Windows on ARM my regular desktop app would run on my ARMphone as-is.

So, what is the future difference between me docking the phone to those devices (#1) and then running photoshop for example (#2), and on the other hand booting a desktop and running the same software on Windows 10 x86 CPUs? The only difference is computational power, that's it.

The increase in computational power will depend on the increase in processing power in the ARMphones, and just as with x86 CPUs, it'll keep going. It's the nature of the beast. So really the question here isn't whether or not Adobe is going to write a UWP version of all their software (they could), but rather whether or not users find their ARMphone powerful enough to do what needs to be done running regular Windows software.

So again, it's not just Windows on ARM phones that's the point here, it's what happens if you connect your ARMphone to a "dumb terminal". It's exactly the same thing as your watch example. Why would anyone want to read emails on a watch? Well, it's a bad question, because the point is that you could use the slightly larger and more wearable watch form factor to house the processing power and output images on your AR-equipped glasses. Conceptually it's exactly what MS has hinted at and what I've said: instead of just a smartphone, your ARMphone becomes a central device, with input/output devices connecting to it.
 

Drael646464

New member
Apr 2, 2017
Oh, I see what you're saying. You're saying that it's the distribution of heat physically that's the issue, not the absolute total temperature necessarily? I'd agree with that.



Well, this is where my comment about a Continuum-like feature off of a multi-core "smartphone" would be useful, because you'd be cooling your one device only. The tablet wouldn't need this cooling because the processing is done on your phone. Dumb tablet just routes input/output signals essentially (keyboard/mouse/phone/touch etc = in, screen/audio = out).

The Lumia 950/950XL are already water cooled if I understand it correctly.



I wrote that poorly. My point was this: you seem to be saying that on current multi-core smartphones the workload isn't distributed well between cores. I'm saying that Win on ARM will enable us to actually run desktop apps on a smartphone, and that some desktop apps will hopefully continue to use as many cores as possible, well distributed, on these phones as well. So if the sales pitch is that we can run Windows on ARM, then the fact that current mobile-programmed apps don't use all cores is irrelevant. What's relevant is what the upcoming apps/usage does to workload distribution.

So the very thing that speaks for Windows on ARM is the very thing that will push users towards usage that benefits from better usage of multiple cores. That's what I mean.



"as a smartphone" should be put into context with what MS has said about that whole device-type. I think it's been said several times that upcoming devices would not fall into existing categories. It's sort of like saying that your watch with voice control is your new smartphone. Is it a "smartphone" or a watch? Who cares, it is what it is.

I actually think it makes some sense, what you're saying. The whole idea that I support is that there's this one device that is mobile/wearable, and it contains the radios for WiFi/cellular/Bluetooth communication etc., and then input/output devices vary. I can see a watch doing that and I can see a smartphone doing that. Just different preferences.

The thing that speaks in favor of a genre-defying smartphone is that size is still an issue, and integration with other devices is still an issue, and so is the user experience. Many people still prefer to hold a device up to their ears to speak to someone (like a phone), and many people still prefer a fairly large screen to browse mail. The transition to a smartphone running Windows on ARM that then uses Continuum, even wirelessly, is more likely to happen sooner (in my opinion) than a watch/eyewear AR combo. Not that we won't get there, but I actually think the path might be through the evolution of the phone rather than the others... although there is some precedent I guess with smart watches.



I think you're missing the point here. Just consider the following two:

1: I can connect my current Lumia 950 to a computer keyboard, mouse and a monitor. It acts as a computer when I do this, but also with cellular connectivity.

2: With Windows on ARM my regular desktop app would run on my ARMphone as-is.

So, what is the future difference between me docking the phone to those devices (#1) and then running photoshop for example (#2), and on the other hand booting a desktop and running the same software on Windows 10 x86 CPUs? The only difference is computational power, that's it.

The increase in computational power will depend on the increase in processing power in the ARMphones, and just as with x86 CPUs, it'll keep going. It's the nature of the beast. So really the question here isn't whether or not Adobe is going to write a UWP version of all their software (they could), but rather whether or not users find their ARMphone powerful enough to do what needs to be done running regular Windows software.

So again, it's not just Windows on ARM phones that's the point here, it's what happens if you connect your ARMphone to a "dumb terminal". It's exactly the same thing as your watch example. Why would anyone want to read emails on a watch? Well, it's a bad question, because the point is that you could use the slightly larger and more wearable watch form factor to house the processing power and output images on your AR-equipped glasses. Conceptually it's exactly what MS has hinted at and what I've said: instead of just a smartphone, your ARMphone becomes a central device, with input/output devices connecting to it.

Generally I agree with everything you said there. But computational power is somewhat size-limited, and is also reaching bottlenecks at the nm process level. No doubt someone will eventually crack that, but you'll still get larger devices with more power than smaller ones. Sure, at some point a phone will be able to run, say, Photoshop. But then there will be more demanding software that it can't run, like life-like VR or complex machine learning (though the latter could run on a server and be served). The other limitation is network speeds. In theory, everything could be run from servers and everything could be a dumb terminal - but that basically requires unlimited bandwidth and zero latency (which would require pretty sci-fi faster-than-light networking, and may not even be physically possible). Short of that, you're always running into either bandwidth or latency concerns for some functions. Streaming life-like VR, for example, probably isn't ever going to be quite the same as doing it locally.
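On the latency point, there's a hard physical floor: signals in fiber travel at roughly 200,000 km/s, so distance alone sets a minimum round-trip time before any server processing is added. A quick back-of-envelope sketch with illustrative distances:

```python
# Lower bound on round-trip time from distance alone (light in fiber ~ 2/3 c).
C_FIBER_KM_PER_S = 200_000  # approximate signal speed in fiber, km/s

def rtt_floor_ms(distance_km):
    """Best-case round trip in milliseconds, ignoring all processing."""
    return 2 * distance_km / C_FIBER_KM_PER_S * 1000

for km in (50, 500, 5000):
    print(f"server {km} km away: >= {rtt_floor_ms(km):.1f} ms round trip")
```

Comfortable VR is usually said to need motion-to-photon latency of around 20 ms, so anything much beyond a metro-area server is already out for that use case, which supports the point about doing it locally.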

For these types of reasons, and not merely form factor, devices of all physical sizes will have some application in the future. Things will maybe get more mobile, but not entirely mobile, if that makes sense. I also think there are physics issues at play, like cooling.

Perhaps quantum computing or similar may change this in the distant future, but I am not sure you and I will ever get to see it.

IBM's massively parallel processor might see some play, and there are other things like using light - we might see stuff like that.

For most everyday purposes though, and for things like current AAA desktop games and Photoshop, I think we'll see phone-sized devices get there in our lifetimes. How quickly, I have no real idea :p
 

mattiasnyc

New member
Nov 20, 2016
Generally I agree with everything you said there. But computational power is somewhat size-limited, and is also reaching bottlenecks at the nm process level. No doubt someone will eventually crack that, but you'll still get larger devices with more power than smaller ones. Sure, at some point a phone will be able to run, say, Photoshop. But then there will be more demanding software that it can't run, like life-like VR or complex machine learning (though the latter could run on a server and be served).

I'm not saying small devices will one day do everything. Of course not. But there are frequently limits to what we do as far as productivity goes. There are certain things I do that haven't changed in over a decade as far as CPU requirements go, and other things that have. So from my perspective it's entirely reasonable to think that in the future I'll be able to do some of my work on my portable device when hooked up to a terminal.

But as people have pointed out, most people don't do really heavy lifting when they use their laptops/desktops. So, what they really need is devices that are fast enough to do whatever it is they do, and I think to a large degree we're looking at office applications and database apps etc.

So it goes back to the question I asked before; why would someone pay multiple times for processors they aren't using if they could instead just pay once for the CPU and connectivity and then just pick dumb but stylish and perfectly adequate terminals for everything else?

And there are also improvements in security, at least possibly so. Instead of having data on a bunch of different devices, with all of them granting access to it or to cloud storage, you could unlock your 'experience' regardless of where you are, using biometric scans on your ARMphone. Seems more secure to me.

The other limitation is network speeds. In theory, everything could be run from servers and everything could be a dumb terminal - but that basically requires unlimited bandwidth and zero latency (which would require pretty sci-fi faster-than-light networking, and may not even be physically possible). Short of that, you're always running into either bandwidth or latency concerns for some functions. Streaming life-like VR, for example, probably isn't ever going to be quite the same as doing it locally.

Well, that wasn't really what I was proposing. But I think the issue here is that today, we still have users complaining when using applications like Word or Excel on a smartphone. Not only is the screen too small, but the device (at least older ones) might not be powerful enough to run it smoothly. So, I would say that we're really close to that being a complete non-issue, if it isn't already with better devices, and that a lot of the stuff we do for work can be done now or soon. In addition to that, I really don't think bandwidth is going to be an issue. Syncing to the cloud is something we can do relatively quickly on our phones, and cell speeds keep increasing. Heck, I get faster uploads on my mobile than I do on my home broadband!

Latency is really only an issue if you think of dumb terminals as streaming data from the cloud and then them having essentially zero computational power. But my point was that we use our smartphone, or ARMphone, to do the processing. So really all you need is either a physical dock like the ones we have already, or a wireless connection. This wireless connection can absolutely be as fast as we need, because many people already use wireless connections to computers for data input, and it's fine.

So again, I'm not saying we're going to run advanced weather calculations on a device in your pocket, or have some render-farm-level AI going on. All I'm saying is that a lot of what most people do most of the time on laptops/desktops is stuff that will be able to be done on phone-sized devices very soon, if not already. The only question is getting used to it and finding a sweet spot for price/performance.

Incidentally, I think Google moving towards a similar solution, where you can operate your Android device using a dumb terminal, is exactly what I'm talking about, again. MS did it "first", now Google is doing it. No doubt people will drool over Google's ingenuity and ignore Continuum, btw. But either way, I think it's those types of products that will convince people, slowly, that there are better ways to spend their money. I just hope MS can churn out a device that does this relatively soon to get a leg up on this. And the big selling point will be access to Windows 10 software running on ARM, not just UWP apps.

Just my 2 cents.
 

Drael646464

New member
Apr 2, 2017
I'm not saying small devices will one day do everything. Of course not. But there are frequently limits to what we do as far as productivity goes. There are certain things I do that haven't changed in over a decade as far as CPU requirements go, and other things that have. So from my perspective it's entirely reasonable to think that in the future I'll be able to do some of my work on my portable device when hooked up to a terminal.

But as people have pointed out, most people don't do really heavy lifting when they use their laptops/desktops. So, what they really need is devices that are fast enough to do whatever it is they do, and I think to a large degree we're looking at office applications and database apps etc.

So it goes back to the question I asked before; why would someone pay multiple times for processors they aren't using if they could instead just pay once for the CPU and connectivity and then just pick dumb but stylish and perfectly adequate terminals for everything else?

And there are also improvements in security, at least possibly so. Instead of having data on a bunch of different devices, with all of them granting access to it or to cloud storage, you could unlock your 'experience' regardless of where you are, using biometric scans on your ARMphone. Seems more secure to me.



Well, that wasn't really what I was proposing. But I think the issue here is that today, we still have users complaining when using applications like Word or Excel on a smartphone. Not only is the screen too small, but the device (at least older ones) might not be powerful enough to run it smoothly. So, I would say that we're really close to that being a complete non-issue, if it isn't already with better devices, and that a lot of the stuff we do for work can be done now or soon. In addition to that, I really don't think bandwidth is going to be an issue. Syncing to the cloud is something we can do relatively quickly on our phones, and cell speeds keep increasing. Heck, I get faster uploads on my mobile than I do on my home broadband!

Latency is really only an issue if you think of dumb terminals as streaming data from the cloud and then them having essentially zero computational power. But my point was that we use our smartphone, or ARMphone, to do the processing. So really all you need is either a physical dock like the ones we have already, or a wireless connection. This wireless connection can absolutely be as fast as we need, because many people already use wireless connections to computers for data input, and it's fine.

So again, I'm not saying we're going to run advanced weather calculations on a device in your pocket, or have some render-farm-level AI going on. All I'm saying is that a lot of what most people do most of the time on laptops/desktops is stuff that will be able to be done on phone-sized devices very soon, if not already. The only question is getting used to it and finding a sweet spot for price/performance.

Incidentally, I think Google moving towards a similar solution, where you can operate your Android device using a dumb terminal, is exactly what I'm talking about, again. MS did it "first", now Google is doing it. No doubt people will drool over Google's ingenuity and ignore Continuum, btw. But either way, I think it's those types of products that will convince people, slowly, that there are better ways to spend their money. I just hope MS can churn out a device that does this relatively soon to get a leg up on this. And the big selling point will be access to Windows 10 software running on ARM, not just UWP apps.

Just my 2 cents.

I agree on the issues surrounding device redundancy. I just see it creating more "merger points" rather than a sort of "one device" for most people.

One thing I see in the immediate future is the explosion of AR/VR as a form of entertainment, like 3D did in film (it took a while, for sure). That will make a more powerful computer or console in the home necessary. AI can most likely be served, so that's no biggie for the tablet/smartphone/watch scenario.

I think Google is working on a hybrid OS, Fuchsia. But it's still a work in progress, and it will suffer similar scaling issues to the ones Windows 10 did initially (where Win32 apps work well enough on a big tablet, they don't scale well to phones or smaller tablets; likewise, smartphone apps are a little too simple and lightweight for a desktop environment).

Admittedly most people don't use much in the way of Win32 apps on desktop, but they do use a lot of peripherals and _some_ Win32 or power-type apps (particularly games for the under-30s, and media-type apps), for which Android apps will feel 'stripped down', as will the UI on a desktop.

And there's enterprise, universities and so on. I can understand Google wishing to create a hybrid OS, for sure, but the pathway isn't without its obstacles in adapting one's current platform to the new one.

Even Apple will face issues when they shift, despite having successful mobile and desktop platforms, because they'll need apps that do both, not just one or the other, and they have both x64- and ARM-based platforms. It's surprising they haven't made one already given that success, and I think they'd have the best shot at a Windows 10 competitor if they did, if it isn't without major issues of some kind (barring of course their intentional compatibility issues, which in today's world, and even more so in the future of IoT, are kind of an obstacle for consumers).

No doubt Apple and Google are cooking up a lot of things; we'll see them soon enough. (I know Apple is rumoured to be working on AR, and Facebook on VR - the latter I expect is some kind of VR social platform, like a "world".)
 

mattiasnyc

New member
Nov 20, 2016
I agree on the issues surrounding device redundancy. I just see it creating more "merger points" rather than a sort of "one device" for most people.

Just to be clear; I'm not saying there's a redundancy in "devices" from the user's perspective, I'm saying there's a redundancy in the amount of CPUs in a home. So from the user's perspective it'd be a central device (or two), and then peripherals. But those peripherals would be what today are the laptop, desktop and tablet, except instead of all that bulk it'd be the input/output devices. People that like a large screen plus keyboard for checking email won't see a difference. It's still just opening up Outlook using a mouse and keyboard and looking at a screen just like we can do now with Continuum (except with absolute transparency). Reading PDFs or whatever on a tablet too would be the same deal. Looks like a tablet. Acts like a tablet. But happens to just be an input/output device for your WoAphone.

To me that makes perfect sense.
 

Drael646464

New member
Apr 2, 2017
Just to be clear; I'm not saying there's a redundancy in "devices" from the user's perspective, I'm saying there's a redundancy in the amount of CPUs in a home. So from the user's perspective it'd be a central device (or two), and then peripherals. But those peripherals would be what today are the laptop, desktop and tablet, except instead of all that bulk it'd be the input/output devices. People that like a large screen plus keyboard for checking email won't see a difference. It's still just opening up Outlook using a mouse and keyboard and looking at a screen just like we can do now with Continuum (except with absolute transparency). Reading PDFs or whatever on a tablet too would be the same deal. Looks like a tablet. Acts like a tablet. But happens to just be an input/output device for your WoAphone.

To me that makes perfect sense.

That's basically what I meant by device redundancy: multiple devices that overlap in function, require syncing and setup, and add cost to the consumer. It means not only increased cost but also extra charging, and effort maintaining the files and apps you want.

It's a shame there isn't a more powerful sync solution too, for those devices we do have. Something that syncs all apps and user folders over local (faster) networks, or even better, via Thunderbolt docking.

This is something I'd be quite keen for: a tablet/phone/whatever that works as a desktop, runs docked, but also syncs apps and files (user-controlled, so you can set exceptions, choose options etc.) via a Thunderbolt dock to a more powerful central media/gaming/VR rig.

Or at least something like this: a fast, wired or wireless full sync of the apps, settings and file folders you choose. Of course something like this would only work with a hybrid OS like Windows.

This way, not only can you reduce your number of devices from five or so down to two, so that you are wasting less cash on device redundancy, but all the devices you do have sync at a deeper level than cloud file folders. Install an app on one, and it's on the other device.

Timeline is a great start. But it could go further, to unify your devices within the Windows system long term. Cloud is great, but I'd like to see something working on a faster timescale for bigger loads locally - so, for example, if I install a game on my tablet, it also runs on my main PC and updates all the saves and game files, so it's game-anywhere. Basically, the devices would be more or less acting like "the same device", within their capabilities (clearly the tablet is not going to be able to run AAA VR titles).
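As a toy illustration of that "fast local full sync" idea, here's a minimal one-way folder sync that copies anything newer on the source side. The mount points are hypothetical, and real app/save-game sync would need far more (conflict handling, app state, installs), so treat it as a sketch of the concept only:

```python
# Toy one-way sync over a fast local link: copy files whose destination
# copy is missing or older. Nothing like full app-level sync, just files.
import os
import shutil

def sync(src, dst):
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            if not os.path.exists(d) or os.path.getmtime(d) < os.path.getmtime(s):
                shutil.copy2(s, d)  # copy2 preserves timestamps

# sync("/mnt/tablet/user", "/mnt/desktop/user")  # hypothetical mount points
```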
 

Rosebank

New member
Oct 6, 2016
This seems interesting!! :-(
https://mspoweruser.com/intel-may-b...ing-legacy-windows-apps-on-arm-based-devices/
However, there have been reports that some companies may try to emulate Intel’s proprietary x86 ISA without Intel’s authorization. Emulation is not a new technology, and Transmeta was notably the last company to claim to have produced a compatible x86 processor using emulation (“code morphing”) techniques. Intel enforced patents relating to SIMD instruction set enhancements against Transmeta’s x86 implementation even though it used emulation.
Basically, they will try to block the x86 emulation part, which is the key technology that will enable this scenario. Intel yesterday posted a press release highlighting the success of x86 for the past 40 years. One of the interesting topics they highlighted was the IP related to x86 which they own: they have a deep and dynamic patent portfolio with over 1,600 patents worldwide relating to instruction set implementations.
 

a5cent

New member
Nov 3, 2011
I suspect that is the main reason neither MS nor Qualcomm have released anything specifying how the emulation works. The less that is publicly known, the more easily they can frame their technology in the most legally beneficial way in court.

I'd be very surprised if MS and Qualcomm didn't have an army of lawyers working on this long before the first developer got to work. Wait and see...
 

Rosebank

New member
Oct 6, 2016
Hey @a5cent, have you seen the flow charts of the emulation process (ARM)? They do seem quite basic, but they show the layers and where emulation gets done in the process. I am sure you have probably seen them. I agree with you though, I have been wanting more detailed info, but it still seems a grey area.
There seems to have been quite a large amount of momentum in the last 3 days regarding this patent breach etc.; it's getting coverage everywhere it seems. The weird thing is I have seen a Qualcomm video of Intel vs Qualcomm motherboard comparisons etc., and the 30% reduced board space. Qualcomm seemed to be talking about boards in production, and then suddenly this scandal breaks out!?
 

mattiasnyc

New member
Nov 20, 2016
This seems interesting!! :-(
https://mspoweruser.com/intel-may-b...ing-legacy-windows-apps-on-arm-based-devices/
However, there have been reports that some companies may try to emulate Intel’s proprietary x86 ISA without Intel’s authorization. Emulation is not a new technology, and Transmeta was notably the last company to claim to have produced a compatible x86 processor using emulation (“code morphing”) techniques. Intel enforced patents relating to SIMD instruction set enhancements against Transmeta’s x86 implementation even though it used emulation.
Basically, they will try to block the x86 emulation part, which is the key technology that will enable this scenario. Intel yesterday posted a press release highlighting the success of x86 for the past 40 years. One of the interesting topics they highlighted was the IP related to x86 which they own: they have a deep and dynamic patent portfolio with over 1,600 patents worldwide relating to instruction set implementations.

Well, brief googling tells me that Transmeta sued Intel, after which Intel counter-sued Transmeta, and that Intel agreed to settle out of court for 250 million dollars as well as dropping their suit. So much for Intel respecting intellectual property rights.

I would also wonder to what degree Intel can really rely on patents that refer to hardware if the emulator runs in software. From what I can tell, if I buy some hardware I'm pretty much free to do as I please with it. I know people that have bought hardware with proprietary code, and they've set up a "sniffer" to log data when the device communicates with other devices. Then they've taken that log and created custom drivers for the device. Obviously the manufacturer complained, but the problem wasn't that it was an infringement on patents - the problem was that the company had deep pockets and threatened to sue. So even if they would have lost the lawsuit, this person would have lost far more financially.

And I'm sort of wondering where Intel intends to go with this. It doesn't speak well for Intel if they're engaging in veiled threats like this. Because if they truly had superior products for this market, they wouldn't be worried about emulation. So really they're worried about losing market share, which says something about their current state (and add AMD's recent CPU announcements to that).

Not liking Intel right now.
 

a5cent

New member
Nov 3, 2011
I would also wonder to what degree Intel can really rely on patents that refer to hardware if the emulator runs in software.

Except it doesn't run just in software!

MS are claiming near native x86 execution speeds on ARM. That is physically impossible without a lot of hardware support. If that doesn't sway you, also consider that Qualcomm was directly and heavily involved in this project. Qualcomm is not a software company. Last but not least, if it's purely a software solution, there would be absolutely no reason to restrict it to the Snapdragon 835. None. But they do...

Hey @a5cent, have you seen the flow charts of the emulation process (ARM)? They do seem quite basic, but they show the layers and where emulation gets done in the process

Yeah, I saw the talk with that chart. It did add some detail (particularly the compiled hybrid PE DLLs were interesting), but the important part that actually does the emulation remains a black box titled "x86 to ARM CPU emulator". That part, the part where all the magic happens, that was completely ignored. :-(

We still know next to nothing about the actual emulator. The only thing I know for sure is that it is definitely not purely a software solution.
 

Rosebank

New member
Oct 6, 2016
Agreed with you there @a5cent, but without wanting to reignite an older argument: could it simply be the superior CPU capability of the 835 and its cores? That would mean the HARDWARE is doing the brute-force work; this setup of the 835 and adequate RAM seems quite miraculous. I write this in response to your "MS are claiming near native x86 execution speeds on ARM. That is physically impossible without a lot of hardware support."
I really can't believe the turn this whole project/situation has taken; I never imagined this at all. Also, I am convinced I have seen video footage of Qualcomm stating they have boards in production... Very interesting developments.
 

a5cent

New member
Nov 3, 2011
Agreed with you there @a5cent, but without wanting to reignite an older argument: could it simply be the superior CPU capability of the 835 and its cores?

No.

It would be possible if the SD835 was multiple times faster than a current x86 desktop CPU. It's just not. Just the fact that the SD835 is restricted to a very conservative power and thermal envelope already rules that out, no matter how good the ARM CPU architecture is.

Assuming ISA translation occurs on-the-fly (not ahead of time), which is what I understand is happening, then how many cores the ARM CPU has is also irrelevant. It's all about single-core performance at that point. There are many technical reasons for that, but they are neither easily explained nor grasped. Maybe just consider an x86 application that can already make use of four cores, like 7-Zip or WinRAR. If you run that on an SD835, the hardware can't provide more cores than the average x86 counterpart can, meaning you have no extra cores to throw at the problem of ISA translation. To then still claim "near native speeds" means each ARM core must be nearly as efficient as each x86 core. That's only one of many reasons why I've said from the very beginning that the number of ARM cores is completely irrelevant to the problem of on-the-fly ISA translation.
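To illustrate why on-the-fly translation is bound to per-core speed, here's a toy translate-once-then-cache loop of the kind dynamic binary translators use. This is purely conceptual; as noted, the actual MS/Qualcomm emulator internals are undisclosed:

```python
# Toy dynamic-translation loop: each guest (x86) basic block is translated
# once, cached, and re-executed from the cache thereafter. The translated
# code still runs on the one core executing this thread, so per-core speed,
# not core count, bounds performance. Purely conceptual, not the real design.

translation_cache = {}  # guest block address -> translated host code

def translate_block(addr):
    # Stand-in for the expensive x86 -> ARM translation step.
    return f"<host code for guest block {addr:#x}>"

def execute(host_code):
    pass  # stand-in for running the translated block

def run_guest(block_trace):
    for addr in block_trace:
        if addr not in translation_cache:        # translate on first sight
            translation_cache[addr] = translate_block(addr)
        execute(translation_cache[addr])         # hot path: cache hit

run_guest([0x1000, 0x1040, 0x1000, 0x1040, 0x1000])  # loops hit the cache
```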

For an ARM core to be nearly as efficient as an x86 core at executing x86 instructions requires some dedicated hardware support. There is no way around it. That's why I stated, at the beginning of this thread, that x86 emulation will not run decently on current hardware. That was a while back, before we knew what the hardware requirements would be. And behold... it has since been revealed that this is limited to the SD835, hardware that is just reaching the market now.
 

mattiasnyc

New member
Nov 20, 2016
Except it doesn't run just in software!

MS are claiming near native x86 execution speeds on ARM. That is physically impossible without a lot of hardware support. If that doesn't sway you, also consider that Qualcomm was directly and heavily involved in this project. Qualcomm is not a software company. Last but not least, if it's purely a software solution, there would be absolutely no reason to restrict it to the Snapdragon 835. None. But they do...

No, I get what you're saying, but my point is that I'm sure that between all the engineers and the lawyers, things aren't at all as clear as Intel wants to make them seem. I mean, if there's a way to enhance an SD processor without it "circumventing" x86, and then allow the software to use those enhancements to emulate x86, then who do you sue? Both?
 

Drael646464

New member
Apr 2, 2017
Except it doesn't run just in software!

MS are claiming near native x86 execution speeds on ARM. That is physically impossible without a lot of hardware support. If that doesn't sway you, also consider that Qualcomm was directly and heavily involved in this project. Qualcomm is not a software company. Last but not least, if it's purely a software solution, there would be absolutely no reason to restrict it to the Snapdragon 835. None. But they do...



Yeah, I saw the talk with that chart. It did add some detail (particularly the compiled hybrid PE DLLs were interesting), but the important part that actually does the emulation remains a black box titled "x86 to ARM CPU emulator". That part, the part where all the magic happens, that was completely ignored. :-(

We still know next to nothing about the actual emulator. The only thing I know for sure is that it is definitely not purely a software solution.

When they first demoed it, the Qualcomm person said it was running on an unmodified chip (the 820, I believe). I don't think there is any hardware component.
 
