There is no such thing as the democratization of technology. If you think about it, this is quite obvious: everything being sold as the democratization of technology is commodification with glitzy marketing sprinkled on top.
What happens at the development level is that saturation has been reached: a specific kind of technology has matured, and the only thing left to do is increase the level of integration. This, however, comes at a cost. One example that everybody (in the western world) experiences every day is the computer (including the smartphone). You see, back in the days of DOS (and, before the Apple historians start howling in protest, the Apple II), when there was no such thing as a Graphical User Interface, processing power – or rather the lack thereof – was a huge (or tight – sic!) bottleneck (as were memory and basically everything else, but that’s not my point…), so programmers had to work very hard to develop and optimize their code so it would execute as fast as possible. This had been a paramount objective since the beginning of the PC era, and it remained one until the mid-90s, when the 486 and the Pentium came along. Back then, Michael Abrash, who worked for companies such as Microsoft and id Software (where he played an important role in developing the game-changing (sic!) Quake engine), wrote:
GUIs, reusable code, portable code written entirely in high-level languages, and object-oriented programming are all the rage now, and promise to remain so for the foreseeable future. The thrust of this technology is to enhance the software development process by offloading as much responsibility as possible to other programmers, and by writing all remaining code in modular, generic form. This modular code then becomes a black box to be reused endlessly without another thought about what actually lies inside. GUIs also reduce development times by making many interface choices for you. That, in turn, makes it possible to create quickly and reliably programs that will be easy for new users to pick up, so software becomes easier to both produce and learn. This is, without question, a Good Thing.
The “black box” approach does not, however, necessarily cause the software itself to become faster, smaller, or more innovative; quite the opposite, I suspect. I’ll reserve judgement on whether that is a good thing or not, but I’ll make a prediction: In the short run, the aforementioned techniques will lead to noticeably larger, slower programs, as programmers understand less and less of what the key parts of their programs do and rely increasingly on general-purpose code written by other people. (In the long run, programs will be bigger and slower yet, but computers will be so fast and will have so much memory that no one will care.) Over time, PC programs will also come to be more similar to one another – and to programs running on other platforms, such as the Mac – as regards both user interface and performance.
Again, I am not saying that this is bad. It does, however, have major implications for the future nature of PC graphics programming, in ways that will directly affect the means by which many of you earn your livings. Not so very long from now, graphics programming – all programming, for that matter – will become mostly a matter of assembling in various ways components written by other people, and will cease to be the all-inclusively creative, mind-bendingly complex pursuit it is today. (Using legally certified black boxes is, by the way, one direction in which the patent lawyers are leading us; legal considerations may be the final nail in the coffin of homegrown code.) For now, though, it’s still within your power, as a PC programmer, to understand and even control every single thing that happens on a computer if you so desire, to realize any vision you may have. Take advantage of this unique window of opportunity to create some magic!
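To make the quoted contrast concrete, here is a minimal sketch – illustrative only, and not taken from Abrash’s actual code – of what “controlling every single thing that happens on a computer” looked like in practice: clearing the screen in VGA mode 13h under 16-bit DOS by writing every byte of video memory yourself.

```c
/* Illustrative sketch, not Abrash’s code: clear the 320x200 VGA mode 13h
 * screen by writing directly into video memory at segment A000h.
 * Assumes mode 13h has already been set (e.g. via INT 10h, AX = 0013h).
 * Builds with era-appropriate 16-bit compilers such as Borland Turbo C;
 * the non-standard `far` keyword will not compile on modern toolchains. */

#define VGA_FRAMEBUFFER ((unsigned char far *)0xA0000000L) /* segment A000h, offset 0 */
#define SCREEN_BYTES    (320u * 200u)                      /* 64,000 pixels, one byte each */

void clear_screen_direct(unsigned char color)
{
    unsigned char far *p = VGA_FRAMEBUFFER;
    unsigned int i;

    /* One tight loop, no layers in between: the programmer knows exactly
     * which bytes are written and can reason about every cycle spent. */
    for (i = 0; i < SCREEN_BYTES; ++i)
        p[i] = color;
}
```

The black-box counterpart is a single call into somebody else’s graphics library: faster to write and easier to learn, but exactly the kind of code whose inner workings, as Abrash predicts, nobody gives another thought.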
Abrash’s prediction has proven to hold true 15 years later. And we see this not only in software, but also in hardware, as I mentioned before. Likewise, a new generation of users pops up, whom I refer to as hacks. A hack is not necessarily a bad person, but is easily perceived as such by the established players in a given field of competition. You see, broadcast engineering used to be (and still is, where a minimum technical standard is upheld) a very complex field, which is why it’s still engineering and not play-as-you-go. Nevertheless, there are companies which go the aforementioned way of integrating mature technology and bringing it to the market at a price “everyone” can afford. In keeping with the frameworks Abrash talks about, this in itself is not a bad thing. The bad thing is that people buying this technology believe it keeps up with the standard – which it does on paper – but it’s just not as reliable, durable, and serviceable as “the proper stuff”. Nevertheless, its users enter into competition with the established players, which, in a market largely driven by price, creates unreasonable expectations, which in turn lead to ludicrous pressure within companies who see their market share flounder.
This is largely because people, in general, know less about more. The underlying assumption is that you don’t have to know how something works, you just have to know how to use it – an assumption which, ironically enough, is propagated by the very companies under that pressure: expert knowledge is costly, and costs are to be driven down, not up. And so the cycle is completed, as there are now even more hacks competing with other hacks on price, while the customer acts as a catalyst to all of this.
The only solution, of course, is to step away from the idea that every battle must be won at any cost, and from the assumption that your customer’s only deciding factor is price. In many cases it is, but in my experience, customers appreciate good service over a low price. And in my line of work, you can only offer good service if you’re competent and able to fix problems – in short, if you’re a professional.
Professionals are able to create magic time and time again; hacks are not, or only by accident.