I'm not an IT manager. I'm a programmer. But I have friends who work in IT for one of the largest print media corporations in my country. They have no problem managing Mac computers. When they buy computers, they mostly buy them because they need to run software A or software B. And software is made by programmers. So where you have the largest and best-prepared pool, you have cheaper and better resources to make software. Enterprises buy a lot of custom software, so it's normal that they find better, cheaper enterprise solutions on Windows, which has a larger pool of professionals than the Apple side. It's the same reason apps are made first for iPhone, then for Android, and maybe for other platforms: the largest pool of trained professionals is found for iPhone, then Android, and so on. This is also why Autodesk's modeling products (Maya and 3ds Max) are the most used, and not open-source Blender, or Houdini, or whatever. Because in college people learn Autodesk tools. So even if you would save considerable money by using open-source or other cheaper alternatives, you won't find anyone to employ who can use them. The pool of professionals is the decisive factor in enterprise selection of software and platform.
That's great for your IT buddies. I'm guessing, since they work for a media company, that the only reason they use Mac is because they bought into the thought that (somehow) Mac provides a better experience when working with media. I'm also guessing that by "managing Mac computer" you mean that they make use of Mac OS workstations for the individuals involved in various parts of the media process, but that the company's true infrastructure (if they have one) is not run on Mac. This is true of most companies, partially for the reasons you mentioned. It is true that there's a lack of 3rd party enterprise-level software for Mac OS. However, to say that this is the sole reason Mac isn't popular in the corporate environment is very close-minded.
I'm not looking to get into a "chicken and the egg" argument with you. Simply look at your logic. You say that businesses go with Windows because that's where the technical talent lies. You say enterprises use Autodesk products because that's what's taught in schools. Why is Autodesk taught in schools? Because that's what enterprises use. Why do enterprises use Autodesk? Because that's what's taught in schools. So which came first?
In the case of Windows, there is no such question. Microsoft had a firm hold on the enterprise market from the get-go, and this has resulted in much higher 3rd party support for their platform. The comparison that you made to iOS and Android is quite apt. However, by the same token, let's take a look at Windows Phone 8. Microsoft is looking to offer something different, which a lot of us believe is better, and using this to gain market share. In gaining market share they gain influence. In gaining influence they are gaining 3rd party support. Apple hasn't done the same with the enterprise market simply because their OS does not fit there. It doesn't allow large enterprises to do what they want to do.
Microsoft has always cared about security. But users were disabling auto-updates to conserve bandwidth, or for who knows what reason. True viruses work by exploiting bugs in order to run with the same permissions as the kernel, thereby gaining admin privileges. Compare modern-day kernel exploits on Windows with other platforms and you will see Microsoft is doing very well. It's the mentality of the user, wanting options and wanting to run with an admin account, that makes security a problem. And my attempt at explaining, in layman's terms, that the security problem is mostly one of user privilege was twisted by you. I don't care to make an undetectable virus just to steal Facebook accounts so I can sell likes. I don't have the time or resources to find a zero-day vulnerability to exploit for one week only. I will make malware and trick the user into installing it. This is how the black-hat world has worked for well over 10 years now. So there are no more viruses like in the old days; they are browser plugins or stuff you accept to install. It's more about social engineering than ever before.
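To put the privilege point in a toy sketch (the roles and operations below are invented for illustration, not any real Windows API): a program inherits the rights of the account that launched it, so a trojan started from a standard account simply cannot perform admin-only operations, no matter how cleverly the user was tricked into running it.

```python
# Toy model of least privilege. Roles and operation names are made up;
# this is not how Windows actually stores permissions.

PRIVILEGES = {
    "standard_user": {"read_documents", "run_apps"},
    "admin":         {"read_documents", "run_apps",
                      "install_driver", "write_system_files"},
}

def authorized(role: str, operation: str) -> bool:
    """A process may only do what the launching account may do."""
    return operation in PRIVILEGES.get(role, set())

# A trojan the user launches inherits the user's rights.
# Under a standard account, the dangerous call simply fails:
print(authorized("standard_user", "install_driver"))  # False
print(authorized("admin", "install_driver"))          # True
```

This is exactly why "everyone runs as admin" is the real problem: in the second call, the same trick succeeds.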
Microsoft didn't make an effort to offer active security measures until Windows Defender was introduced with Vista. Sure, they offered patches for their existing OS's holes, but that's hardly proactive. Granted, neither did anyone else, but with the amount of security issues experienced by Microsoft's OS, the user perception (one that's still held) became that Windows is an insecure OS. They're taking great strides to try to amend this, but it will take time and is unlikely to ever change completely. This is all beside the point, anyway.
I agree with you that the easiest (and, thus, most profitable) way of exploiting security in our modern world is by means of phishing and social exploits. What I'm arguing here is that a simple GUI change shouldn't sacrifice security to the point of ruining an OS, which is what you're stating. In the real world, there's always a trade-off between usability and security. After all, the only system that's 100% secure is one that doesn't exist. The issue, then, is providing functionality while still maintaining the desired level of security. Empowering the user to modify the ability of background tasks to run does introduce security risks. If done well, though, there is no reason why it should introduce enough of a threat that it would ruin the OS's security. As a programmer, I would expect you to appreciate this more than I.
Unix / Linux / Mac OS X are the same from a kernel point of view. The differences in architecture are minor. The cosmetic look does not matter much when we talk about the inner workings of an OS. Let designers and end users care about user interfaces.
You're saying that every single system that's been ever made with a Unix base is the same thing with a different GUI. That's just silly.
If you don't know your stuff, don't express it in words. The Windows registry has nothing to do with viruses, and the fact that Windows has a registry has nothing to do with security.
And you say you make software? Hmm... Anyway, the Windows registry offers the ability to control nearly every aspect of the system with simple commands. Some of the biggest malware attacks on the OS (including many based on social exploitation tactics) are able to do what they do by making unauthorized changes to the system via the registry.
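As a hedged illustration of that last point (this is a simulation, not real registry code; on an actual Windows box you would read the HKCU Run key through the standard `winreg` module): malware that manages to write itself an autostart entry gets relaunched at every logon, which is exactly why unauthorized registry changes are a security issue. The entry names and paths below are invented.

```python
# Simulated autostart scan. A plain dict stands in for the
# HKCU\Software\Microsoft\Windows\CurrentVersion\Run key so the
# example runs anywhere; names and paths are hypothetical.

KNOWN_GOOD = {"OneDrive", "SecurityHealth"}

def suspicious_autostart_entries(run_key: dict) -> list:
    """Return autostart entry names not on the allow-list, sorted."""
    return sorted(name for name in run_key if name not in KNOWN_GOOD)

# Malware that gains write access to the Run key can add itself here
# and be launched at every logon:
simulated_run_key = {
    "OneDrive": r"C:\Program Files\OneDrive\OneDrive.exe /background",
    "TotallyLegit": r"C:\Users\me\AppData\Local\Temp\payload.exe",
}

print(suspicious_autostart_entries(simulated_run_key))  # ['TotallyLegit']
```

No kernel exploit needed: a socially engineered install plus one registry write is persistence.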
Before you go personally attacking someone about the correctness of their statements, you may want to research the topic.
Look at how Linux is migrating toward a registry approach. Security problems on modern Windows come from users wanting to run as admin just to install all their plugins and stuff easily. That's also the reason they disable UAC, which is nothing more than a warning that a potentially dangerous operation is about to occur and that you are granting admin rights to a piece of software.
That's, again, a very simplistic blanket statement. It's partially true, I'll grant you that. Also, AFAIK UAC gets disabled because it's annoying and ineffective. People who shouldn't be allowing changes don't bother to read what they're clicking "yes" to. People who know enough to know what they're doing are just bothered by the extra step.
More frequent live tile updates mean you will create bottlenecks in network traffic, user-mode / kernel-mode switches, and internal memory reads and writes. Bottlenecks create lag. Test it by starting a large download from the marketplace and see how well all your apps work when something in the background is creating bottlenecks on network / file access and your foreground app isn't the only one with full access.
There you go again, making huge leaps. Network traffic is a wildly varying beast, don't try to tell me the feature won't be feasible because of this. The other "bottlenecks" you mention are subject to hardware and software optimization and won't necessarily occur.
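For example (my own sketch, not anything Microsoft actually ships): a token-bucket limiter is one standard software optimization that caps aggregate background update traffic regardless of how aggressive a user-chosen interval is, which is exactly the kind of mitigation that makes the "it will bottleneck" argument a solvable engineering problem rather than a dealbreaker.

```python
# Token-bucket rate limiter sketch: updates beyond the allowed rate
# are dropped (or coalesced) instead of queuing up and causing lag.

import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec        # steady-state updates per second
        self.capacity = burst           # short bursts allowed up to this
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """True if this update may proceed; False if it should be skipped."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=2, burst=5)
allowed = sum(bucket.allow() for _ in range(100))
print(allowed)  # roughly the burst size plus a few refills, never all 100
```

Even if a power user cranks the tile interval way down, the platform's limiter still bounds the total load.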
Look, if Microsoft allowed the option to do this and someone went and loaded 100 live tiles that updated in real time and downloaded tons of information then yeah, I'm sure the OS would run like crap. This, however, isn't a failure of the OS. It is a failure of the user for being an *****. Enable the option, make it obscure, have it disabled (i.e. work the same as now) by default. Users who enable this feature will then be divided into two classes:
1. Those who know what they're doing and won't have the issues you describe.
2. Those who know just enough to think they know what they're doing. They'll probably run into issues and honestly, if they're unable to realize it's their fault then they can go to iOS for all I care.
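A minimal sketch of that "obscure, off by default" idea, with every setting name invented for illustration: defaults stay exactly as they are today, power users opt in explicitly, and the platform still enforces a hard floor so even an opted-in user can't request something pathological.

```python
# Hypothetical preference model. All keys and values are made up;
# this is not a real Windows Phone setting.

DEFAULTS = {"live_tile_interval_sec": 1800}   # today's fixed behaviour
MIN_INTERVAL_SEC = 60                         # floor the platform enforces

def effective_interval(user_prefs: dict) -> int:
    """Return the tile update interval actually used by the OS."""
    if not user_prefs.get("i_know_what_i_am_doing", False):
        # Not opted in: behave exactly as the platform does now.
        return DEFAULTS["live_tile_interval_sec"]
    requested = user_prefs.get("live_tile_interval_sec",
                               DEFAULTS["live_tile_interval_sec"])
    # Even power users can't go below the platform floor.
    return max(MIN_INTERVAL_SEC, requested)

print(effective_interval({}))  # 1800: untouched users see no change
print(effective_interval({"i_know_what_i_am_doing": True,
                          "live_tile_interval_sec": 5}))  # clamped to 60
```

The majority never sees the option; the minority that enables it is still fenced in by the floor.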
Restricting options in the name of security will create progress. See the current browser dilemma about cross-site scripting and iframe communication, and how those restrictions on options, made for security's sake, created room for the design of new, secure ways of dealing with it. Giving options in spite of security problems will create a Nero paradox, which was just an example of what happens when no limits are imposed.
Saying that security problems are caused by the user, though not always, does not mean we must ignore the user. It means we must deal with the user first, since that is the biggest attack surface. It's not that there aren't other areas with problems, because that is a known fact, but that the user situation must be dealt with and not swept under the carpet.
Restricting options in the name of security creates progress.... I don't think so. Since you're taking my thoughts on options to an extreme (back to Nero...) why don't we do the same with yours?
In an extremely simplified platform, the creator dictates exactly how the system is to be used. This is likely going to be far more secure than the opposite, sure, but means that a single entity controls the progress and scope of the platform. Progress can still be achieved, certainly, but will be extremely limited in comparison to a system on the other end.
Progress comes from change. The potential for change is increased when more avenues are opened to effect it. In the real world, where we hardly run into extremes, a system that leans more toward the open end is likely to show more progress than one that does not. Whether that's inherently better or not is another debate.
My point was simply that having a mentality that empowering the end user always results in platform-threatening issues is likely to limit the progress of said platform.
Apple proved exactly that: given a pretty-looking package and good marketing, you can make money. Meaning everyone must take a step back and redesign their strategy. Since Apple proved that visually appealing stuff with marketing sells, the large mass of consumers doesn't care about options, functions, or complexities. They mostly care about design and support. In order to remain competitive one must adapt to what Apple discovered. Having a minor base of power users who want something other than the Apple strategy, and targeting only them, means you have to sell them expensive packages with options in order to break even. Follow what Apple discovered consumers want and go with the trend.
While I partially agree with what you're saying, I feel that you're oversimplifying things again. Microsoft has offered more options and less-than-pretty packages because, historically, the core of their system came from the enterprise realm. It was then simply easier for them to port their existing system to the consumer market than to make a whole new one that forwent all the options.
I don't believe you know much about Windows 8 security and what makes it so great. Remember, Windows 8 is a hybrid: the classic Win32 NT kernel with an app-silo type of system on top. Sure, having apps isolated, with only limited ways of communicating, is more secure, but you still have access to the NT kernel, so the problem is by no means solved. So basically, taking into account the weakest link, Windows 8 security truly is right where Windows 7 left it, with some improvements, but insignificant ones.
Not sure what you're getting at here. Windows 8 has the most comprehensive built-in security suite of any Windows OS to date out of the box. That's all I ever really said about it.
BlackBerry was popular at a time when phones with keyboards were popular, and because of the BlackBerry data plan. It's not as if a settings menu full of "options" will do anything for consumers. And their inability to understand what made them popular in the enterprise world was also their downfall.
Where's your data on this, since you're so confident? Hardware keyboard and BlackBerry's data plan made the entire platform? Sounds unlikely. I feel you misinterpreted what I was getting at. BlackBerry was never terribly popular in the consumer market. It was popular in the enterprise market, though. This was largely due to the level of customization, integration and security that BlackBerry allowed - which was in part due to the host of options they offered.
My "simplistic" view of Windows Mobile may seem simplistic for one paragraph of a forum post, but it is right, and it represents the main problem with that platform. Too customizable for performance in mobile-world scenarios. Too complex. Too open to modification by manufacturers, meaning a lack of identity, a missing app-store ecosystem, and too much desire by manufacturers to differentiate themselves, loading the phones with shells that were not ideal from a performance point of view. You can see why it flopped. Killing it was the right decision. Windows Phone at least comes with a "Metro" user experience designed to give it identity and to appeal to average customers through ease of use and content before chrome.
We could sit and talk about why Windows Mobile was good or bad all day. It won't bring it back and it certainly won't bring relevance into this topic. All I'm saying is that you're providing your opinion as a fact. While some of your opinions could be right, some are bound to be wrong. Don't pass your opinion off as fact.
Now my biggest grievance with your post is that you still consider an enterprise server more critical than a smartphone from a security point of view. This is so wrong. It's not about one (single, singular, only) smartphone. It's about all smartphones. There are more mobile phones in the world than there are toilets. If one smartphone is inherently insecure, they all are, meaning billions of financial accounts and personal data ripe for use. An enterprise server also has a team of professionals behind it whose job is to mitigate security risks, while smartphones don't. Hacking PSN, as in your example, will give me maybe a billion names, and in the time it takes me to decrypt the credit card information, all that data will have become useless. All I get in the end is a bunch of names. Now hack a billion smartphones and you get the ability to profile 800 million people and to steal 500 million credit cards, without raising a red flag. You have unencrypted data which you can use before someone closes that option for you. But humanity needs to learn from cataclysms, as it always has. If by enterprise security you mean stealing other data, like corporate documents, well, as a hacker you can't sell them. Remember the Pepsi - Coca-Cola case, when a server was breached and Coca-Cola gave the files back to Pepsi (or it was the other way around, I don't remember). You risk more in court by using data obtained from hacking than by just doing your work.
My point was that you're imposing stricter security constraints on an inherently end-user-bound product than you are on an enterprise-level system. You're very right about most of what you said here. The issue I have with what you're saying, though, is that while it's easier to target a few end users and not raise a flag, the moment you start to affect millions, red flags will pop up everywhere. While it will be inherently more difficult to access the data of a single, highly guarded system than that of many poorly guarded systems, you also have many more vantage points to deal with on the latter approach. With millions affected, it takes only one of them realizing what occurred to raise a flag. If a single huge security exploit, as you propose, arose and affected Windows Phone, Microsoft would dive head-first into resolving it before it ruined their platform.
Again, I'm not saying that the security of end-user systems isn't important. I'm just in a different realm of thought than you on where the importance lies.
Allowing users to change the interval of the live tile update is a risk. Not a security risk in the sense of malware (that is a discussion about too much openness and options). It's a risk in how a product will perform outside of its measured performance envelope, in how a product will be perceived if, by lack of knowledge, you make your phone drain its battery faster or lag, or in how such an option will allow apps to track GPS position every 5 minutes and start spamming geo-advertisements.
Ranting or not, just please keep this in mind. Within a maximum of 2 years a new type of spam (ads, if you want) will emerge: geo ads, the ones related to your GPS position. And then you tell me how happy you will be that Google and Microsoft and Apple bowed to the "people" and gave them unreasonable options so that others can abuse those options.
I'm going to cease arguing with you on this subject simply because you are an extremist. At every turn, each of your arguments is based on snowball effects which you presume will occur in the presumed approach that a company would presumably take. What if, like I said, the option is allowed but its use is obscured and not enabled by default? Then the majority of users, the ones you're mostly concerned about, will be completely unaffected. Power users are given the option and expected to understand the (potential) consequences. If they don't, then that's their business. We're a minority anyway.
You're at least partially right. Introducing more options for customizability to a system does pose an increased security risk. To jump from that to the conclusion that it will destroy the system (or the user experience, or whatever) is to assume that many other things, such as implementation, have failed. After all, we're not talking about a major change here.