W10 Mobile x86 .exe Emulator

Grant Taylor3

New member
Mar 15, 2014
I would not say software now is any heavier. I would say things have gotten lighter because of the drive to be more mobile and battery efficient.

I can remember going from Windows 3.1 to Windows NT 3.1 and needing a 486 with 16MB of RAM to get good performance.

That would be like needing to go from 8GB of RAM for Windows 7 to needing 24GB of RAM when upgrading to Windows 10.

Office 2016 runs well enough on an 8 year old PC.

The days of the average user needing the most powerful CPU and a ton of RAM are long gone.

So modern software is doing more and not slowing down older systems.
 

Rosebank

New member
Oct 6, 2016
There will always be exceptions, and it is typical of a5cent to demonstrate this with his XFlow comments (which I have looked at in terms of system requirements). That takes things to the very extreme of system requirements, but we are not talking extremes here; we are talking medium capability for the vast majority of users. Photoshop is medium capability, and its system requirements are far, far less than XFlow's. This is also an evolving technology, so there is no need to mention extremes.
 

a5cent

New member
Nov 3, 2011
^ my point had nothing to do with extremes vs. average. My point was that it's WHAT we do that is relevant...not HOW OFTEN (e.g. occasionally). I mentioned xflow just to be 100% sure I have an accurate example. As usual the point went right over your head.
 

a5cent

New member
Nov 3, 2011
I would not say software now is any heavier. I would say things have gotten lighter because of the drive to be more mobile and battery efficient.



So modern software is doing more and not slowing down older systems.

That's certainly true of mobile apps, but that's not what we're talking about emulating, right?
I don't know what the average use case will be for an emulation-capable device. I suspect LOB software. That is certainly far heavier than it was 20 years ago, before JVMs and CLRs started taking over. I agree Word hasn't changed much in the last 8 years, but I was taking your 20-year timeframe as a reference. It has certainly become a lot heavier since then.
 

Rosebank

New member
Oct 6, 2016
^ my point had nothing to do with extremes vs. average. My point was that it's WHAT we do that is relevant...not HOW OFTEN (e.g. occasionally). I chose xflow as an example just to be 100% sure I have an accurate example. As usual the point went right over your head.
Don't be so sure your points are going over the top of my head. You revealed yourself to be a software engineer, pulling rank on me, but you have no clue who you are talking to. That aside, you are attempting to pull rank on everyone else too, including posters in this thread and journalists, and your general attitude is to argue points that have been proven otherwise (emulation on ARM). I can see you took stock of my words and have been studying the ARM CPU's capabilities, etc. Much appreciated. Going by your recent postings, you now have a better understanding of the much broader picture.
 

a5cent

New member
Nov 3, 2011
Don't be so sure your points are going over the top of my head.

Can we please stop this. I decided to ignore your posts. I'd appreciate you doing the same. We just don't speak the same language. You are incapable of making a technically precise statement, and I'm incapable of making a technically precise statement that you can't misunderstand.

I actually have done some more research, but as yet I have not found anything new in regard to x86 emulation on ARM. I still hold the exact same technical positions on ARM and emulation that I held at the start of this thread. None of that has changed. Some helpful additional information was uncovered, in particular your mention of price, but that isn't technical. The fact that you think I've changed my mind, or that I've discovered something profoundly new, leaves me no choice but to conclude that you still have no idea what I was trying to explain a few pages back. I just don't care anymore, because you evade or ignore every issue that doesn't fit your own view.

I'm not stubborn. I have no problem changing my mind. I just need data and facts to do so. All the data I've found tell me I've been correct so far. The announcement of a retail device (not a test board) with x86 emulation capabilities running on something older than an SD835 would change my mind. MS publicizing that their emulation capabilities are implemented entirely in software without any hardware support whatsoever would also change my mind. So far nothing I've read has.

Can we stop the nagging now?
 

Joe920

Active member
Nov 13, 2012
We now return to our regular programming of being excited about this new development in the Wonderful World of Windows.

Please?
 

Cruncher04

New member
Jan 26, 2014
I couldn't find anything suggesting that spec2006 was compiled with Intel's compiler for Core M and LLVM for A9X. It seems far more probable that the same compiler was used for both. Do you have a link?

I remember we had a discussion in the Anandtech forums about precisely this issue. The thing is, ICC does high-level optimizations (at the AST or DAG level) that prepare the code for SIMD parallelism and emit AVX instructions, whereas LLVM does not. The outcome is that you essentially compare the performance of the AVX units in x86 against the integer units in ARM, because no NEON code was generated. There were other issues too, like different pointer-aliasing options given to the compilers. What Anandtech did was simply ask both Intel and Apple for the best options for SPEC and use those.
That would all be fine if they had put a disclaimer in the article that they were comparing compilers more than architectures, but alas they did not.

What really matters is the design of the silicon behind the ISA, and there aren't many differences left in that area between ARM and x86.

Agreed, but you can only work within the restrictions given. For example, Intel will never move to a weakly ordered memory model, because that would break compatibility. They still have translation to microcode, with all its downsides, like caching pre-decoded instructions, which are subject to pre-decode misses, etc. Other examples would be the smaller GP register set (16 vs. 32), variable-length instructions, no proper 3-operand instructions, etc. There are lots of things holding x86/x64 back.
You might also watch the interview with Jim Keller:
https://www.youtube.com/watch?v=SOTFE7sJY-Q
He clearly states that you can get more performance and/or higher efficiency with ARM. Keep in mind his team designed an ARM and an x86 architecture in parallel.

Using the same techniques, we'd still reach 50% of native performance, but those 50% now would just represent a lot more computing power than it did back then.

Well, 50% of Atom's CPU performance on average when emulated on a Snapdragon 835 is something I would call a best case, and impressive.

However, there are few applications that exclusively run application user code. So even when emulated, all the calls into the OS/app framework are translated into native calls.

As an example, when selecting a menu item in a Win32 application, there are only a few lines of (potentially emulated) application code and a lot of native code in the app framework involved in the task. There are only a few cases where application code dominates, like when you run a video editor and apply a video effect.
 

Krystianpants

New member
Sep 2, 2014
Read post no. 1 of this thread. Whether you like it or not, we are now speaking the same language. All your technical smoke screens aside, you are now talking x86 emulation on ARM and even considering the future of emulation. I can happily avoid your technical reasons for this or that. The truth is it's happening. You are stubborn, a5cent; even when the data came to fruition you were reluctant to accept it. Now that it's common knowledge, you still wish to find reasons it won't work well for general programs, not just extremes. I know one thing: I think I will take MS's word for their ability to run this type of emulation rather than yours, a5cent. Your technical input has been good but has also been used as smoke and mirrors. There is no need; speculation is the only thing we can all agree on, it seems.

It is happening, but to what extent? Microsoft shows you Photoshop running on an ARM CPU that it won't even support the tech on. Why? Because when people talk about what phones or ARM CPUs can't do, the first thing that always comes up is Photoshop. A compiler basically translates higher-level code to really low-level code that ends up as binary. Emulating one program is much simpler when you know all the system calls it makes and the assembly code being used: which registers are used and how the CPU scheduler behaves. RISC and CISC CPUs aren't that extremely different these days. In fact, RISC has evolved to incorporate quite a few registers and technologies that are quite different from the original design. It simply keeps the load/store instructions separate. So you can do quite a lot with it, especially if Qualcomm adds extra features for the exact purpose of emulation.


There are a lot of different chips in the x86 world, and many of them have features to enhance performance. A lot of apps may even go very low-level to exploit certain features. It's also this ability to do anything you want in Windows that makes it so unsafe. These apps may check whether certain CPU features are present and then use them, and may have minimum requirements for optimal performance. So even if the apps are emulated, this small detail could make them extremely slow, as they were meant to use those features. My guess is that MS is testing all the big, well-known apps and trying to get them optimized. I recall Photoshop being run, but were any complex filters applied in the video? Some of that stuff can take quite a lot of processing time even on nice machines. A lot of newer apps even use GPUs, with code for AMD/NVIDIA-specific features.

If MS wants to be safe, they are going to have to virtualize this entire environment. When Continuum is triggered, it would set up a virtualized area for these apps with all the libraries, etc. I mean, Win32 is huge, with all the different things it can reference. You will need a lot of space. You will need a lot of memory too.
 

Rosebank

New member
Oct 6, 2016
We just don't speak the same language.

Agreed, Joe. Looking forward to establishing the facts and the intimate details MS is withholding from us. :)
@a5cent >> Read post no. 1 of this thread. Whether you like it or not, we are now speaking the same language. All your technical smoke screens aside, you are now talking x86 emulation on ARM and even considering the future of emulation. I can happily avoid your technical reasons for this or that. The truth is it's happening. Now that it's common knowledge, you still wish to find reasons it won't work well for general programs, not just extremes. I know one thing: I think I will take MS's word for their ability to run this type of emulation rather than yours, a5cent. Your technical input has been good but has also been used as smoke and mirrors. There is no need; speculation is the only thing we can all agree on, it seems.
 

Rosebank

New member
Oct 6, 2016
If MS wants to be safe, they are going to have to virtualize this entire environment. I mean, Win32 is huge. You will need a lot of space. You will need a lot of memory too.
Memory, agreed. Space? If you mean internal storage, then yes, agreed too; both will need to be in the region of 8GB and 128GB respectively. MS won't need to run this entire environment; that is asking too much too soon (if we are talking about emulating Win32). We need to set Photoshop as an acceptable benchmark for satisfactory performance. From there, anything lower has a good chance of working; anything higher might (at the moment) be too problematic and unstable. Even the 8GB assumption I make is probably too high; I think it can work on less. That is just my opinion, though: with 6GB of RAM I think emulation could work.
 

Cruncher04

New member
Jan 26, 2014
Memory, agreed. Space? If you mean internal storage, then yes, agreed too; both will need to be in the region of 8GB and 128GB respectively. MS won't need to run this entire environment; that is asking too much too soon (if we are talking about emulating Win32).

Not sure what you are talking about, but as I already mentioned, Win32 is all native ARM... there will be no emulated Win32. Native ARM Win32 was already present in Windows RT with 2GB of RAM. Sure, Windows RT had all libraries compiled for the Thumb-2 instruction set. I expect Win32 will have a somewhat higher footprint when compiled for AArch64 compared to Thumb-2, but not by an order of magnitude. The emulation itself does not have a large memory footprint: possibly a few MB as a translation cache for the translated instructions.
 

a5cent

New member
Nov 3, 2011
Agreed, but you can only work within the restrictions given. For example, Intel will never move to a weakly ordered memory model, because that would break compatibility. They still have translation to microcode, with all its downsides, like caching pre-decoded instructions, which are subject to pre-decode misses, etc. Other examples would be the smaller GP register set (16 vs. 32), variable-length instructions, no proper 3-operand instructions, etc. There are lots of things holding x86/x64 back.
You might also watch the interview with Jim Keller
I haven't watched the video yet, but look forward to doing so when I have the time. I'll also take a look through the comments section on Anandtech's test. Maybe I'll find something there about what compiler(s) and settings were used. If not, we'll probably have to accept that as something we can't fully resolve.

I have no problem believing that ARM may have some ISA-related advantages. I think Intel does too, but that's almost beside the point now. I object only to the claim that x86 is so utterly and hopelessly behind that it's laughable to even compare them. That's all I disagree with. Maybe there were irregularities with the compilers used. I don't know. If there were, fixing that may put ARM in the lead, but I don't see any data suggesting such a lead would be overwhelming. That's all I'm saying. If we can agree that the very latest iterations (and only those) from both camps are at least somewhat comparable, then we're on the same page.

Maybe I'll completely reverse my opinion after watching the video, but that's where I stand now. ;-)

As an example, when selecting a menu item in a Win32 application, there are only a few lines of (potentially emulated) application code and a lot of native code in the app framework involved in the task. There are only a few cases where application code dominates, like when you run a video editor and apply a video effect.
Yeah. TBH, I wouldn't be surprised if many video-editing packages utilized the GPU, in which case there'd be barely any x86 emulation occurring even then. Photoshop isn't video-editing software, but a lot of its filters run on the GPU, including the filter effect MS demonstrated, which was therefore not being emulated.

There are examples of CPU heavy consumer software however. RTS games with a lot of AI (Starcraft) would be a typical example. It's exactly that sort of thing that I'd like to see before I form my own opinion of what to expect. Until then I'll remain solidly skeptical. The fact that Microsoft's demos at WinHEC appear to have purposefully avoided showing anything that puts any real load on the CPU feels fishy.
 

Rosebank

New member
Oct 6, 2016
Not sure what you are talking about, but as I already mentioned, Win32 is all native ARM... there will be no emulated Win32. Native ARM Win32 was already present in Windows RT with 2GB of RAM. Sure, Windows RT had all libraries compiled for the Thumb-2 instruction set. I expect Win32 will have a somewhat higher footprint when compiled for AArch64 compared to Thumb-2, but not by an order of magnitude. The emulation itself does not have a large memory footprint: possibly a few MB as a translation cache for the translated instructions.
Forgive me, I meant emulation within Win32. I accidentally omitted the word "within" in my last statement. @Cruncher04
 

Cruncher04

New member
Jan 26, 2014
There are examples of CPU heavy consumer software however. RTS games with a lot of AI (Starcraft) would be a typical example. It's exactly that sort of thing that I'd like to see before I form my own opinion of what to expect. Until then I'll remain solidly skeptical. The fact that Microsoft's demos at WinHEC appear to have purposefully avoided showing anything that puts any real load on the CPU feels fishy.

Yeah, sure, RTS games are more often than not CPU-bound. And of course, when presenting a new technology, you certainly do not show the worst case.
With respect to the CPU-boundness of games, you can also look at this from a different point of view. If we compare a modern desktop system, say a 6700K + GTX 1070, to a tablet with, say, a Snapdragon 835, we can conclude that the CPU performance is (give or take) 4x higher but the GPU performance is more than 10x higher. So the issue with tablets is more the GPU and much less the CPU.
Now, considering that the Adreno 540 found in the Snapdragon 835 is faster than the HD 515 found in the m3-6Y30, I even see a chance of some games running faster even when emulated, assuming they are GPU-bound, which is not particularly unlikely.
 

Rosebank

New member
Oct 6, 2016
Thanks @Cruncher04. These examples (although not exactly the same) match some of my early findings from some weeks ago, and I was actually astonished at the performance (and underuse) of mobile CPUs. This is a new technology, so we don't just jump into the deep end. Having said that, I am not sure we can expect to see games running faster under emulation; that will never ever be possible.
 

a5cent

New member
Nov 3, 2011
Yeah, sure, RTS games are more often than not CPU-bound. And of course, when presenting a new technology, you certainly do not show the worst case.
True, but you also don't demonstrate the case that barely exercises the technology at all :wink: There needs to be some emulation going on. While the worst-case scenario would not be flattering, it would at least have been helpful for judging the state of the technology. As it is now, nobody knows what to expect from anything that isn't a GPU demo.

I'm just looking for a reference point. MS could also have demonstrated opening and updating a complicated excel sheet on two devices, one with emulation and one without. A comparison like that would probably have been the most helpful.

In regard to GPU-bound games running faster on an SD835 than on a Core M: yes, I see a good chance of that for anything that is basically a GPU demo. For anything that puts some load on the CPU (which is what we are far more likely to see in practice), we just don't know. It depends on the ratio of GPU- to CPU-boundedness and the efficiency of the emulation technology... that's all a bit hard to judge from where I stand.

There are extremes at both ends of the spectrum. The extremes I'd like to see demonstrated would be helpful in regard to understanding MS' solution. The other extremes... not helpful at all.
 

Krystianpants

New member
Sep 2, 2014
Not sure what you are talking about, but as I already mentioned, Win32 is all native ARM... there will be no emulated Win32. Native ARM Win32 was already present in Windows RT with 2GB of RAM. Sure, Windows RT had all libraries compiled for the Thumb-2 instruction set. I expect Win32 will have a somewhat higher footprint when compiled for AArch64 compared to Thumb-2, but not by an order of magnitude. The emulation itself does not have a large memory footprint: possibly a few MB as a translation cache for the translated instructions.

My guess is that on mobile, MS is going to virtualize the environment when in Continuum. It shouldn't allow non-UWP apps to run while in mobile mode. Of course, both the phone and the Continuum device need to function separately. This is going to need a decent amount of memory.
 

Rosebank

New member
Oct 6, 2016
My guess is that on mobile, MS is going to virtualize the environment when in Continuum. It shouldn't allow non-UWP apps to run while in mobile mode. Of course, both the phone and the Continuum device need to function separately. This is going to need a decent amount of memory.

That is a possibility, and it makes sense, but I think by the time this hits mobiles (phones) we will have reached a new precedent in phone specs. I think 6GB and 8GB phones (running Windows) will be around in a year or two, and by then we will have moved on from the 820 to the 835 and beyond. I would personally like to see a phone run emulated programs. I keep thinking of the commonly quoted example of a person on a train, heading to or back from work, using the phone as a computer and running an emulated program to be productive on that journey. That would not need Continuum, though, and if we confine this to Continuum I am not sure MS will make any progress with the wow factor, let alone attract more customers.
 
