In the article “Systems Past: The Only 8 Software Innovations We Actually Use,” David A. Dalrymple argues that the major innovations in software were all invented between 1955 and 1970. Since then, in his view, we have merely been improving on those concepts incrementally: lowering costs, improving performance, and optimizing various parts of them, such as memory usage.
He writes: “I find that all the significant concepts in software systems were invented/discovered in the 15 years between 1955 and 1970. What have we been doing since then? Mostly making things faster, cheaper, more memory-consuming, smaller, cheaper, dramatically less efficient, more secure, and worryingly glitchy. And we’ve been rehashing the same ideas over and over again. Interactivity is now ‘event-driven programming’. Transactions are now ‘concurrency’. Internetworking is now ‘mesh networking’. Also, we have tabbed browsing now, because overlapping windows were a bad skeuomorphism from the start, and desktop notifications, because whatever is all the way in the corner of your screen is probably not very important. ‘Flexible view control’ is relegated to the few and the proud who run something like xmonad or herbstluftwm on their custom-compiled GNU/Linux.”
Can Bringing More Academic Ideas Give Us More Room to Innovate?
Perhaps the solution to this incrementalism is to bring the ideas of computer science academia into the industry, especially the more esoteric ones. For example, the benefits of Haskell’s type system and type inference should be available in mainstream languages. This is in fact starting to happen, with Elm bringing a better type system to front-end web development.
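To make the appeal concrete, here is a small sketch of what type inference buys in a mainstream language; the example is mine, not from the article, and TypeScript stands in for the Haskell/Elm style of checking:

```typescript
// TypeScript infers the result types below with no annotations,
// much as Haskell and Elm would.
const double = (n: number) => n * 2;                      // inferred return type: number
const lengths = ["elm", "haskell"].map((s) => s.length);  // inferred type: number[]

// A call like double("oops") is rejected at compile time,
// so that whole class of runtime errors never ships.
console.log(double(21), lengths);
```

The point is not the arithmetic but that the compiler tracks the types for you: the annotations that do exist are minimal, and mistakes surface before the code runs.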
Though, on second thought, better type systems and functional programming seem like more of an incremental improvement to programming languages than a new innovation.
Can Using Formal Methods Give Us More Space for Innovation?
It may be that when we reduce the costs of using formal methods, such as design by contract and model checking with Alloy, we will unlock more room for the abstract thinking that can lead to innovation. As it is, in the daily life of a programmer (whether a web developer or an embedded software developer), the main tasks are to piece components together, to find what functionality already exists to be re-used, re-combined, and built upon, to revise the code of others, and to write many test cases to ensure the code functions as correctly as possible. This may itself be reducing innovation: we are not giving developers the space to come up with revolutionary ideas, let alone to try those ideas out and implement them in real-world systems.
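As a concrete illustration of the design-by-contract idea, here is a minimal sketch; the `requires`/`ensures` helpers are hypothetical, not any particular library’s API:

```typescript
// Minimal design-by-contract helpers: a violated contract fails loudly
// at the call site instead of corrupting state downstream.
function requires(cond: boolean, msg: string): void {
  if (!cond) throw new Error(`precondition failed: ${msg}`);
}
function ensures(cond: boolean, msg: string): void {
  if (!cond) throw new Error(`postcondition failed: ${msg}`);
}

// Contract: the input must be non-empty; the result is an upper bound on
// every element and is itself one of the elements.
function maxOf(xs: number[]): number {
  requires(xs.length > 0, "xs must be non-empty");
  const result = xs.reduce((a, b) => (a > b ? a : b));
  ensures(xs.every((x) => x <= result), "result bounds every element");
  ensures(xs.includes(result), "result is taken from xs");
  return result;
}
```

The contracts double as documentation: the caller can read exactly what `maxOf` assumes and guarantees, without digging through its body.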
Maybe We Just Need to Use the Innovations We Already Have?
Maybe we do not need as much innovation, just the knowledge, skills
and time to implement the previous decades’ innovations. In the tech
industry, it is still possible to run across systems that have
absolutely no unit tests and have low quality assurance standards or
that do not use version control. How can we innovate when we still
have to deal with the security issues caused by the C programming
language? How can we innovate when we have created an industry where
the majority of work is in maintaining legacy systems?
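Adopting these basics costs less than it might seem. As a sketch (with `slugify` as a hypothetical function under test), a unit test needs nothing more than a plain assertion, no framework required:

```typescript
// A hypothetical function under test.
function slugify(title: string): string {
  return title.trim().toLowerCase().replace(/\s+/g, "-");
}

// A bare-bones assertion helper is enough to catch regressions.
function assertEqual<T>(actual: T, expected: T): void {
  if (actual !== expected) {
    throw new Error(`expected ${expected}, got ${actual}`);
  }
}

assertEqual(slugify("  Systems Past  "), "systems-past");
assertEqual(slugify("Only 8 Innovations"), "only-8-innovations");
```

A team that starts here can graduate to a proper test runner later; the hard part is the habit, not the tooling.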
How Can a Software Developer Get Started on Being Innovative?
A good first step toward innovative ideas is for software developers to consider the bigger picture whenever they work on a project. By this, I do not mean trying to bundle everything into a framework (witness the explosion of frameworks over the last decade, each claiming to contain brand-new advances). Instead, I mean keeping in mind that some of the concepts used to solve everyday problems can also be applied to harder problems. For instance, if the enterprise is building CRUD apps or transforming data from one format to another, there may well be some other problem in there that needs solving: a deeper problem that cannot be solved simply by switching frameworks or adding a new library. It may require a new way of thinking, or a new algorithm or data structure, to solve the root problem.
Or maybe it is time to clamp down on the other parts of the development process, and to discourage the poor management and poor business practices that lead to failures in delivering projects and to low overall quality. Possibly there is a knowledge/project allocation problem, and we simply need developers to work on projects where their talents can be better harnessed. A typical example would be letting junior developers worry about yet another CRUD app, while freeing senior developers and chief architects to work on higher-level problems. Maybe we need to go further still, and allow the bridge between academia and industry to be built earlier.
It is interesting to think that the beginnings of the tech industry spawned so many great innovations, and sobering to consider that we may merely be iterating on those innovations and re-implementing them.