Where ugly code lies

There are multiple definitions of what software architecture is, notwithstanding that in some areas the term cannot legally be used. Definitions vary from high level code design to organizational issues. James O. Coplien and Gertrud Bjørnvig came up with a good summary in Lean Architecture.

  • Architecture is more about form than structure. Implementation details are not in scope. Interfaces, classes and design paradigms are not even considered; only the general form of the final solution is.
  • Architecture is more about compression than abstraction. Abstractions are in code. They are detailed. The architecture part of the work is about larger scale partitioning into well named patterns, which may have multiple implementations.
  • Much architecture is not about solving user problems. While I don’t fully agree with this one, it’s true that most users will not see the changes right away.

These are high level concepts that have a huge impact on the code. The resulting partitioning determines how additions are made to the code and what form they take. There is a direct relationship between the software design and the API available to implementers.

When I look at code in software designed using good techniques, there is typically a clear distinction between a core managing the general process and extensions following interfaces that are called at the appropriate time. When you look at the code inside the core, it really does not seem to do much. There are usually a few strange incantations to call the extension points efficiently and massage the information passed through. The code is not really pretty, but the structure it represents is clean. The glue has to go somewhere.
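
To make that shape concrete, here is a minimal sketch, assuming a hypothetical Exporter extension-point interface and an ExportService core (none of this comes from an actual project): the core only sequences the process and massages data on the way to the leaves.

```typescript
// Hypothetical extension-point contract: each exporter only knows its own format.
interface Exporter {
  supports(format: string): boolean;
  render(rows: Record<string, unknown>[]): string;
}

// The core owns the general process (collect, pick an extension, hand off)
// and nothing else. All the interesting behaviour lives behind the interface.
class ExportService {
  private exporters: Exporter[] = [];

  register(exporter: Exporter): void {
    this.exporters.push(exporter);
  }

  export(rows: Record<string, unknown>[], format: string): string {
    const exporter = this.exporters.find((e) => e.supports(format));
    if (!exporter) {
      throw new Error(`No exporter registered for format: ${format}`);
    }
    // A bit of massaging before handing the data to the leaf.
    return exporter.render(rows.filter((r) => Object.keys(r).length > 0));
  }
}
```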

The leaves, or extension points, are typically a complete jungle. Contortions must be made to fit the interface while mapping to an external system. Some pieces were written quickly to serve a pressing matter, fell into a technical debt backlog and eventually out of sight. Code is duplicated around, carried from older generations to new ones and evolving over time, except that the ancestors stay around and never receive the improvements of the new generations. Quality varies widely, as do the implementers’ abilities and experience, but all of the components are isolated and do not cause trouble… most of the time.

Seeing how code rots even in controlled environments, I’m always a bit scared when I see a developer searching the web, grabbing random plugins for an open source platform and including them in the code. Disregarding the license issues, which are almost never studied, the practice is plain dangerous. There are security implications. Most developers publishing plugins are not malicious; they are simply ignorant of the flaws they introduce.

jQuery is probably the flagship of the category: a quality core full of arcane incantations, surrounded by a jungle of plugins. Having 50,000 plugins may seem like a selling point, until you consider that most of them are lesser duplicates of other ones and that code quality is appallingly low. In most cases, it takes less than 30 seconds to realize a plugin was written by a self-proclaimed expert who knew nothing of jQuery, just enough JavaScript to smash together a piece of functionality while following a tutorial and brand it as a jQuery plugin for popularity’s sake. Never use a plugin without auditing the code first.
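
For contrast, a well-behaved plugin does not need much. The sketch below, assuming the standard jQuery typings and a made-up highlight plugin, follows the documented pattern of extending $.fn, returning this for chaining and iterating over the matched elements.

```typescript
import $ from "jquery";

// Augment the jQuery interface so TypeScript knows about the new plugin.
declare global {
  interface JQuery {
    highlight(options?: { color?: string }): JQuery;
  }
}

// A made-up plugin, kept deliberately small: merge options with defaults,
// return `this` so calls remain chainable, and use each() so the plugin
// works on multi-element selections -- the rules careless plugins break first.
$.fn.highlight = function (this: JQuery, options?: { color?: string }): JQuery {
  const settings = { color: "yellow", ...options };
  return this.each(function () {
    $(this).css("background-color", settings.color);
  });
};

// Usage: $("mark, .note").highlight({ color: "#ffd" });
```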

Even if good care is taken to control the leaves, ugly code will appear all over. There is no other solution than to go back and add the missing abstractions: provide the additional tools needed to handle the frequent problems that were duplicated all around. No amount of planning will predict the detailed needs of those extension points. What allows architecture to work is compression, being able to skip details so the system can be understood as a whole. The job is not done when the core is in place. Some time must be allocated to watching patterns emerge and responding to them, either by modifying the core or by providing use-at-will components. It can be done in multiple steps too.
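
As an illustration of what a use-at-will component can look like, assume several extension points kept rewriting the same defensive input check (the names below are invented for the example): extracting it once gives the leaves something to opt into without forcing anything through the core.

```typescript
// Hypothetical helper extracted after the same defensive code kept
// reappearing in extension points. Nothing forces a leaf to use it,
// but having it around stops the copy-paste from spreading further.
export function requireFields(
  input: Record<string, unknown>,
  fields: string[]
): Record<string, unknown> {
  const missing = fields.filter((f) => input[f] === undefined || input[f] === null);
  if (missing.length > 0) {
    throw new Error(`Missing required fields: ${missing.join(", ")}`);
  }
  return input;
}

// A leaf that opts in: the contortions needed to fit the external system
// remain, but the boilerplate validation no longer has to be rewritten.
export function importContact(raw: Record<string, unknown>) {
  const safe = requireFields(raw, ["email", "name"]);
  return { email: String(safe.email).toLowerCase(), name: String(safe.name).trim() };
}
```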

Recently, I was asked to do a lot of high level refactoring in Tiki. Major components had systemic flaws that had been known for a while, and many determined they had to be attacked after the release of the long term support version. High level work has several impacts, but sometimes just providing low level tools can improve the platform significantly. Cleaner code makes the high level changes easier to perform. It only takes a few hours to run through several thousand lines of code and identify commonalities that could be extracted. Extract them, deploy them around. Iterate. Automated tests to support those changes would be nice, but most of the time the changes are so low level that it’s almost impossible to get them wrong.
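
To give an idea of the kind of low level commonality that turns up during such a pass, here is an invented example; it is not Tiki’s actual code, just the shape of the change.

```typescript
// Before, repeated inline in slightly different forms across many files:
//   const title = data && data["title"] !== undefined ? String(data["title"]) : "";
//
// After, extracted once and deployed everywhere (invented helper, not Tiki's API):
export function str(
  data: Record<string, unknown> | null,
  key: string,
  fallback = ""
): string {
  if (!data || data[key] === undefined || data[key] === null) {
    return fallback;
  }
  return String(data[key]);
}

// Each call site shrinks to a single readable line:
//   const title = str(data, "title");
// A change this mechanical is easy to verify by inspection, which is why it
// stays safe even without automated tests around it.
```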

One thought on “Where ugly code lies”

  1. Perhaps more strongly so in software development than in other economic activities, tomorrow never comes to revisit…quick fixes or whatever. No one can predict the future, and in spite of the spate of tools and libraries and frameworks and such like, development times only increase, exposing any initial overview to corrosive expansion. The ‘code rot’ seems to be an institutional effect related to an organization’s size, whose managers expect compensation based on ongoing process rather than a deliverable and operable end point. Whatever ‘architecture’ is, it too often becomes another facet of a perpetual motion machine.
