I should also add, more specifically, that "natural language" has been a general company direction or theme for quite a while now, about as long as AR has been a focus. What distinguishes this from other companies, and why things seem a bit slow right now, is IMO that it's more of an OS-level integration than a "separate thing" the way assistants tend to operate. I.e., it's supposed to eventually be "an input method". Getting natural language going relates to one of MSFT's acquisitions: a year or so ago it purchased a company that provides sentence-level word context. But natural language communication and proactive feedback are both reasonably complex features, so it's not likely we'll see any completed version of them for a while. And thus it looks like less is going on, much the same way we all wait with bated breath for OneCore and CShell. It's a bigger project, not that nothing's happening.
Imagine if Apple users acted like this. Nothing has happened with Siri for ages. I personally know Apple is cooking up something, but because they don't go blabbing to customers about every feature they're working on, users just expect to be surprised.