There are a few issues with your idea:
1. Desktop programs are generally more complex than an "app". They have many configuration dialogs and settings. For touch support, you need controls that are *much* larger than those in a typical Desktop program, which means less information per screen or some sort of scrolling mechanism.
2. Desktop programs are typically "information dense": they have lots of information on the screen at one time. Look at the ribbons in Office and other Desktop programs. You also see pop-up tooltips over UI elements, which obviously won't work with touch since there is no "hover" detection for a fingertip.
3. The mouse is a precision pointing device. I can click on a specific pixel in my Win32 programs if I want. A fingertip is much larger, so you have to do nearest-target detection to figure out what the user is trying to select when there are multiple small items on the screen.
4. The final nail is that Desktop programs are typically for productivity, which means items #1-3 above are critical to their usefulness.
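The "nearest detection" in item #3 can be sketched as a small hit-test helper. This is a hypothetical illustration, not Win32 API code (`TargetRect`, `nearest_target`, and the radius value are made up for the example): given a touch point and a set of small targets, pick the closest one within a finger-sized radius instead of requiring a pixel-exact hit.

```c
/* Hypothetical target rectangle; a stand-in, not the Win32 RECT type. */
typedef struct { int left, top, right, bottom; } TargetRect;

/* Squared distance from point (x, y) to the nearest edge of rect r
   (0 if the point is inside the rect). */
static int dist_sq_to_rect(int x, int y, const TargetRect *r) {
    int dx = (x < r->left) ? r->left - x : (x > r->right)  ? x - r->right  : 0;
    int dy = (y < r->top)  ? r->top  - y : (y > r->bottom) ? y - r->bottom : 0;
    return dx * dx + dy * dy;
}

/* Return the index of the target nearest to the touch point, or -1 if
   every target is farther away than touch_radius pixels. */
int nearest_target(int x, int y, const TargetRect *targets, int count,
                   int touch_radius) {
    int best = -1, best_d = touch_radius * touch_radius + 1;
    for (int i = 0; i < count; i++) {
        int d = dist_sq_to_rect(x, y, &targets[i]);
        if (d < best_d) { best_d = d; best = i; }
    }
    return best;
}
```

The touch radius would be tuned to roughly the fingertip contact patch at the display's DPI; with a mouse the same code degenerates to an exact hit test by passing a radius of 0.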
Note that touch has worked fine in Desktop programs since at least Win7 (on a touchscreen or a trackpad). Without changing any code, my Win32 programs get useful input from touch. I don't handle the touch messages in my window procs, so the messages go on to User32 and get translated into mouse messages, which my programs do handle. Pinch-to-zoom gets translated into mousewheel messages, tap-to-select gets translated into mouse button messages, double-tap triggers the default action, etc. (I forget which touch gesture equates to right-click). It's not as good as full touch support, but it works.
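That fall-through behavior can be sketched schematically. This is a simplified simulation, not real Win32 code: the `WM_*` values match the real constants, but both procs here are stand-ins, and in the real system the translation happens inside User32/DefWindowProc rather than in app code.

```c
/* Values match the real Win32 message constants. */
enum { WM_TOUCH = 0x0240, WM_GESTURE = 0x0119,
       WM_LBUTTONDOWN = 0x0201, WM_MOUSEWHEEL = 0x020A };

/* Stand-in for the default handler: unhandled touch/gesture messages
   are translated into equivalent mouse messages for the app. */
int def_window_proc(unsigned msg) {
    switch (msg) {
    case WM_GESTURE: return WM_MOUSEWHEEL;  /* e.g. pinch -> wheel  */
    case WM_TOUCH:   return WM_LBUTTONDOWN; /* e.g. tap  -> click   */
    default:         return (int)msg;       /* pass through         */
    }
}

/* A window proc that ignores touch entirely, as described above:
   it handles only mouse messages and forwards everything else. */
int window_proc(unsigned msg) {
    switch (msg) {
    case WM_LBUTTONDOWN: /* handle click  */ return WM_LBUTTONDOWN;
    case WM_MOUSEWHEEL:  /* handle scroll */ return WM_MOUSEWHEEL;
    default:             return def_window_proc(msg);
    }
}
```

The point of the sketch is just the control flow: because the app's window proc never claims the touch messages, they end up as ordinary mouse input with zero code changes.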
What I'm doing in my most tablet-friendly Win32 program is adding a "tablet" mode so that it works better on Intel-based Windows tablets. Tablet mode takes the main window fullscreen and enables taps on critical UI items, which bring up widely spaced dialogs. All configuration dialogs remain Desktop-oriented with dense controls. The idea is that users can do all the complex configuration at home on a decent monitor with mouse and keyboard, and then, when in the field in real time, switch to tablet mode.
MSFT cut their own throats by making the WinRT framework incompatible with Win7. There's no way I could spend the dev time to convert my Win32 programs to the WinRT framework because there were no users and no demand. All they had to do was solve the problems in the existing Win32 API.

The good things about WinRT:
- curated Store
- easy install/uninstall
- scalable UI
- Direct3D
- better security

The overwhelming cons:
- crippled core WinRT API
- overly restrictive file/directory access
- significant fragmentation between WinRT80 and WinPRT80
- obsession with Async
- poor performance
- inevitable bugs for a 1.0 API
- no Win7 compatibility
None of the good things I listed were impossible to implement on Win7. All they needed to do was define a secure subset of Win32, a scalable UI API/framework, and a well-defined application package format for easy install/uninstall. That could have been delivered in a Windows Update or a Platform Update for Win7. They could have said "works on Win7, works better on Win8".