The last couple of months have been busy. Last November, I joined/co-founded HireVoice, a company setting out to do brand monitoring for employers. Helping companies improve to attract better talent seemed like a good cause, and early feedback was promising. As the technical co-founder, I started building the solution. Soon enough, we had a decent application in place, ready to be used. It was partial and it had gaps, but it was built to be production-ready. I don’t believe in prototypes, and with the tools freely available, there is no reason to settle for anything less than high standards. This is when we figured out that getting people from interested to actually writing checks was not going to be as easy as we had anticipated. Even though we had a good vision, and our advisers at the time were very encouraging, the reality of being a small supplier with no history makes selling hard. Our benefits were not tangible enough.

We moved on to different ideas, pitching all around, trying to find something that would cause more than interest and provide tangible results as soon as possible for the potential clients. To cut the story short, we essentially built 5 different products to various levels over the course of 8 months, only to be back at the starting point. The only difference is that we knew our market a whole lot better.

We came up with another plan. Instead of keeping the focus on perception as we previously did, we went towards recruiting. We followed the money. With some early validation, we figured we could start building a solution again. I took the weekend off, intending to get started early on Monday morning, fully refreshed and energized.

Monday morning arrived. I sat down at my computer with my double espresso ready to start. I had my plans, I knew exactly what I had to do. This is usually an ideal scenario. I do my best work in the morning and when I am fully aware of what needs to be done. I got to work and I realized the spark was gone. Even though we had a good plan, it wasn’t good for me. It did not serve a purpose meaningful enough to motivate me into making the sacrifices that come with starting a business. This was no longer about helping companies improve and make people happy at new jobs, it was just about recruiting and perpetuating the same old cycle.

My heart just wasn’t in it. With the motivation gone, I started thinking about what I had left behind over the past few months. My personal relationships had suffered. I had been working a lot: partially still consulting to pay the bills, mostly on the company, and spending most of my time thinking about it. The idea of keeping up that rhythm any longer, and the consequences it could bring, was simply unacceptable.

I have read a lot of books. Recently, I had read more about business and start-ups. None of them had prepared me for this. How do you tell your business partner you are out? No chapter ever covers this. I did it the only way I could think of: say it frankly and openly, and order beers to end it cleanly. I did it the same day, then spent a few hours wondering if I had made the right decision. I did not get much work done in the two following weeks. There is no formula to tell you when to cut your losses. Even if it did not amount to much as a company, there is still attachment to what we built and the time we spent.

This makes HireVoice part of the statistics, those 90% of start-ups that don’t make it. However, looking back, I don’t see it as a complete failure. I worked on fun challenges, met many interesting people, and got a better understanding of how businesses operate. Part of the code still lives through an open source project and is being used by a few people. I still maintain it for fun. Somehow, helping a handful of people I don’t know is more motivating to me.

In and out of hot water

Some problems are just too obvious. Blatantly unacceptable. Yet, we live with them because we grow accustomed to them. Dozens of people work on a project and solve small issues in the code by patching things up. That’s just the way it’s done. We all do it without thinking there is another way. The code piles up, duplication creeps in and inconsistency spreads. It happens so gradually that no one notices. The only solution is gaining perspective. Recently, I got away from a project I had been with for years. The official time away was really only 3 months. I can’t say I have significantly more knowledge about software right now than I had 3 months ago. Yet, as I came back, I came across a tiny issue. It really was just another one-liner fix, one of those I had done countless times before. However, this time, it just felt unacceptable. What I saw was not the small issue to be fixed; it was the systemic problem affecting the whole. I can fix that once and for all.

Knowing the ins and outs of a project has some advantages. Changes can be made really quickly because you know exactly where to look. When you see a piece of code, you can tell from the style who wrote it and ask the right questions. You know the history of how it came to be and why it’s like that. Sadly, you also know why it won’t change anytime soon. Or will it? Given some amount of effort, anything about software can change. We’re only too busy with our daily work to realize it.

The same effect applies at different levels. People arriving on a new project have the gift of ignorance. They don’t see the reasons. They see what is wrong to them and how it should be for them. Of course, this often comes out in a harsh way for those who spent a lot of effort making the project as good as it is, with all the flaws it has. Anyone who has worked on a project for a significant amount of time and been on the receiving end of such comments knows that reality can be hard. Still, for all the good advice newcomers can offer, what they don’t have is the trust of the community and the knowledge required to anticipate all the repercussions a change can have. Yet, they do tend to see major issues that no one had any intention of solving.

At a much smaller level, it’s not much different from closing a coding session at the end of the day in the middle of an impossible bug. The next morning, the solution just magically appears. This can be attributed to being well rested and sharper, but it might also just be about letting time pass. Deciding to stop searching is a tough call. You’re always 15 minutes away from finding the solution when debugging, until you realize the hypothesis was wrong. It takes a fair amount of self-confidence to know you can take the night off and still be able to fix the issue in time for a deadline. Sure, getting back into it will take some time. It requires rebuilding the mental model of the problem, but rebuilding might be just what’s needed when you are on the wrong track.

This is nothing new. It has been written many times that code reviews are more effective when done a few days later or by someone else entirely. Wiegers indicates that when the work is too fresh in your memory, you recite it based on what it should be rather than what it is, hence you don’t really review it. What if, when you search for a problem, whether a localized bug or a systemic issue that requires major work, there is no way you can find it outside of blind luck unless you have taken enough distance to clear your assumptions?

Summer report

For some reason I never quite understood, I always tend to be extremely busy in the summer when I would much rather enjoy the fresh air and take it slow, and be less busy during the winter when heading out is less attractive. This summer was no exception. After the traveling, I started a new mandate with a new client, and that brought my busyness to a whole new level.

In my last post, I mentioned a lot of wiki-related events happening over the summer and that I would attend them all. It turned out to be an exhausting stretch. Too many interesting people to meet, not enough time, even in days that never seem to end in Poland. As always, I was in a constant dilemma between attending sessions, the open space or just creating spontaneous hallway discussions. There was plenty of space for discussion. The city of Gdańsk not being so large, at least not the touristic area in which everyone stayed, entering just about any bar or restaurant, at any time of the day, would lead to sitting with another group of conference attendees. WikiMania did not end before the plane landed in Munich, which apparently was the connection city everyone used, at which point I had to run to catch my slightly tight connection to Barcelona.

I know, there are worse ways to spend part of the summer than having to go work in Barcelona.

I came to a few conclusions during WikiSym/WikiMania:

  • Sociotechnical is the chosen word by academics to discuss what the rest of us call the social web or web 2.0.
  • Adding a graph does not make a presentation look any more researched. It most likely exposes the flaws.
  • Wikipedia is much larger than I knew, and they still have a lot of ambitions.
  • Some people behind the scenes really enjoy office politics, which most likely creates a barrier with the rest of us.
  • One would think open source and academic research have close objectives, but collaboration remains hard.
  • The analyses performed on wiki data lead to fascinating results.
  • The community is very diverse, and Truth in Numbers is a very good demonstration of it for those who could not be there.

As I came back home, I had a few days to wrap up projects before getting to work for a new client, all of which had to happen while fighting jet lag. I still have not found the time to catch up with the people I met, but I still plan on it.

One of the very nice surprises I had a few days ago is the recent formation of Montréal Ouvert (the site is also partially available in English), which held its first meeting last week. The meeting appeared to be a success to me. I’m very bad at counting crowds, but there seemed to be somewhere between 40 and 50 people attending. Participants came from various professions and included some city representatives, which is very promising. However, the next steps are still a little fuzzy and how one may get involved is unclear. The organizers seemed to have matters well in hand. There will likely be some sort of hack fest in the coming weeks or months to build prototypes and make the case for open data. I don’t know how related this was to Make Web Not War a few months prior. It may just be one of those ideas whose time has come.

I also got to spend a little time in Ottawa to meet with the BigBlueButton team and discuss further integration with Tiki. At this time, the integration is minimal because very few features are fully exposed. Discussions were fruitful and a lot more should be possible with version 0.8, now in development. Discussing the various use cases showed that we did not approach the integration using the same metaphor, partially because it is not quite explicit in the API. The integration in Tiki is based on the concept of rooms as permanent entities that you can reserve through alternate mechanisms, which maps quite closely to how meeting rooms work in physical spaces. The intended integration was mostly built around the concept of meetings happening at a specific moment in time. Detailed documentation cannot always explain the larger picture.
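
The mismatch in metaphors can be sketched as two tiny data models. This is only an illustration of the concepts we discussed, written in Python for brevity; the names are made up and do not reflect BigBlueButton’s or Tiki’s actual APIs.

```python
from dataclasses import dataclass
from datetime import datetime

# Two ways to model the same service: Tiki's integration treats a room as
# a permanent entity (like a physical meeting room), while the API was
# designed around one-off meetings happening at a point in time.

@dataclass
class Room:
    """Permanent entity; reservations attach to it over time."""
    name: str

@dataclass
class Meeting:
    """One-off event; exists only for its scheduled moment."""
    topic: str
    starts_at: datetime

room = Room("team-standup")
meeting = Meeting("Sprint review", datetime(2010, 9, 1, 14, 0))
print(room.name, meeting.topic)
```

Neither model is wrong; they simply answer different questions, which is exactly why the integration discussions were needed.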

Upcoming events

This summer, I will have my largest event line-up around a single theme, and none of it will be technical! It will begin on June 25th with RecentChangesCamp (RoCoCo, to give it a French flavor) in Montreal. I first attended that event the last time it was in Montreal and again last year in Portland. It’s the gathering of wiki enthusiasts, developers, and pretty much anyone who cares to attend (it’s free). The entire event is based around the concept of Open Space, which means you cannot really know what to expect. Both times I attended, it had a strong local feel, even though the event moves around.

Next in line is WikiSym, which will be held in Gdańsk (Poland) on July 7-9th. I have also attended it twice (Montreal in 2007, Porto in 2008). I missed last year’s in Orlando due to a schedule conflict. WikiSym is an ACM conference, making it the most expensive wiki conference in the world (still fair, by other standards). Unlike the others, which are more community-driven, this one comes from the academic world (you know it when they refer to you as a practitioner). Most of the presentations are actually paper presentations. Because of that, attending the presentations themselves is not so valuable, as the entire content is handed to you when you get there. It’s much better to spend time chatting with everyone in the now-traditional Open Space. It really is a once-a-year opportunity to meet everyone from all over the world who has spent years studying various topics around wikis. The local audience is almost absent, except that the event tends to go to places with an active scientific wiki community.

The final stop will be WikiMania, at the exact same location as the previous event, until July 11th. I really don’t know what to expect there. I have never attended the official Wikimedia conference. However, it has a fantastic website with tons of relevant information for attendees. It probably has something to do with it being an open wiki attended by Wikipedia contributors.

I will next head toward Barcelona for a mandatory TikiFest. However, I don’t really consider this to be in the line-up as it’s mostly about meeting with friends.

That is three events on wikis and collaboration. Wikis being the simplest database that could possibly work, what could require 8-9 days on a single topic? It turns out the technology does not really matter. Just like software, writing is not hard. Getting many people to do it together is a much bigger challenge. Organizing the content alone to suit the needs of a community is challenging. Because the structure is so simple, it puts a lot of pressure on humans to link it all together, navigate the content and find the information they are looking for.

More information overload please

A few months back, Microsoft made a presentation on OData at PHP Quebec. While I found the format interesting at first, with the way you can easily navigate and explore the dataset, I must admit I was a bit skeptical. After all, public organizations handing out their data to Microsoft does sound like a terrible idea. While a lot of that data will be hosted on Microsoft technologies, the format remains open, and it appears to be picking up.

I guess what was missing originally for me was a real use case. The example at the presentation used a sample database with products and inventory. Completely boring stuff. Today, at Make Web Not War, I had a conversation with Cory Fowler (with Jenna Hoffman sitting close by), who has been promoting OData for the city of Guelph. I was convinced right away that this was the right way to go. Not necessarily OData as a format, but opening up information for citizens to explore and improve the city. If OData can emerge as a widespread standard to do it, that’s fine by me. The objective is far away from technology. In fact, when I look at it, I barely see the technology. It’s about providing open access to information for anyone to use. How they will use it is up to them.
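
As an illustration of the kind of exploration this enables, here is a small sketch in Python that filters an OData-style JSON result set. The payload shape and field names are made up for the example; real endpoints expose their own schemas.

```python
import json

# A hypothetical OData-style JSON response, similar in shape to what a
# city's service endpoint might return (the records here are invented).
payload = """
{"d": {"results": [
    {"Name": "Central Library", "Category": "Library",
     "Latitude": 43.5448, "Longitude": -80.2482},
    {"Name": "Exhibition Park", "Category": "Park",
     "Latitude": 43.5560, "Longitude": -80.2540}
]}}
"""

def facilities_by_category(raw, category):
    """Filter a parsed OData-style result set by category."""
    results = json.loads(raw)["d"]["results"]
    return [r["Name"] for r in results if r["Category"] == category]

print(facilities_by_category(payload, "Park"))  # ['Exhibition Park']
```

The point is not the dozen lines of code; it is that once the data is published in a predictable format, anyone can slice it for their own purpose.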

The conference had a competition attached to it. Two projects among the finalists were using OData. One of them created a driving game with checkpoints in the city of Vancouver. They simply used the map data to build the streets and position buildings. That is a fairly ludicrous use of publicly available information, but still impressive that a small team could build a reasonable game environment in a short amount of time. The other project used data from Edmonton to rank houses based on the availability of nearby services, basically helping people seeking new properties to evaluate the neighborhood without actually getting away from their computer.

This is only the tip of the iceberg. The data made available at this time is mostly geographical. Cities expose the location of the various services they offer. The uses you can make out of it are quite limited. I’ve seen other applications helping you locate nearby parks or libraries. Sure, knowing there is a police station nearby is good, but there could be so much more. What we need is just more data: crime locations, incident reports, power usage, water consumption. Once relevant information is out there, small organizations and even businesses will be able to use it to find useful information and track it over time. At this time, a lot of the data is collected but only accessible by a few people. Effort duplication occurs when others attempt to collect it. Waste. Decisions are made based on poor evidence.

So there is information out there for Vancouver, Edmonton, even Guelph. Nothing about Montreal. Nothing in the province of Quebec that I could find. I think this is just sad.

Actually, if there is anything out there, it might be very hard to find. Even if there are these great data sources available openly, it remains hard to find them. There is no central index at this time. Even if there were, the question remains of what should go in it. Official sources? Collaborative sources? Not that there is anything like that, but consider people flagging potholes on streets with their mobile phones as they walk around. Of course, accuracy would vary, but it would serve as a great tool for the city employees to figure out which areas should become a priority. There are so many opportunities and so many challenges related to open data access. I don’t think we are fully prepared for the shift yet.

A bit too much

Well, Confoo is now over. That is quite a lot of stress off my shoulders. Overall, I think the conference was a large success and opened up nice opportunities for the future. Over the years, PHP Quebec had evolved to include more and more topics related to PHP and web development. This year was the natural extension: shifting the focus away from PHP and towards the web, including other programming languages such as Python, and sharing common tracks for web standards, testing, project management and security. Most of the conference was still centered around PHP, and that was made very clear on Thursday morning during Rasmus Lerdorf’s presentation (which had to be moved to the ballroom, drawing 250-300 attendees and leaving some other speakers facing an empty audience), but hopefully the other user groups will be able to grow in the next year.

Having 8 tracks in parallel was a bit too much. It made session selection hard, especially since I always keep some time for hallway sessions. I feel that each track lost quite a lot of participants this year compared to previous editions.

For my own sessions, I learned a big lesson this year: I should not bite off more than I can chew. It turns out some topics are much, much harder to approach than others. A session on refactoring legacy software seemed like a great idea, until I actually had to piece together the content for it. I attempted multiple ways to approach the topic and ended up with one that made some sense to me, but very little to the audience, it seems. I spent so much time distilling and organizing the content that I had very little time to prepare the actual presentation. What came out was mostly a terrible performance on my part. I am truly sorry for that.

Lesson of the year: Never submit topics that involve abstract complexity.

The plan I ended up with was a little like this:

  • Explain why rewriting the software from scratch is not an option: primarily because management will never accept it, but also because we don’t know in detail everything the application does, and the maintenance effort won’t stop during the rewrite.
  • Bringing a codebase back to life requires a break from the past. Developers must sit down and determine long term objectives and directions to take, figure out what aspects of the software must be kept and those that must change, and find a few concrete steps to be taken.
  • The effort is futile if the same practices that caused the degradation are kept. Unit testing should be part of the strategy and coding standards must be raised.
  • The rest of the presentation was meant to be a bit more practical on how to gradually improve code quality: removing duplication, breaking dependencies on APIs, improving readability and reducing complexity by breaking down very large functions into more manageable units.
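
To make the last point concrete, here is a minimal sketch of that kind of breakdown, in Python rather than PHP for brevity. The function names and the import scenario are made up; the point is only the shape of the change.

```python
# Hypothetical after-picture: the steps of a formerly oversized import
# function, each extracted into a small, nameable, testable unit.

def parse_row(line):
    """Split a raw 'name; price' line into its two fields."""
    name, price = line.split(";")
    return name.strip(), price.strip()

def normalize_price(price):
    """Turn a European-style price string like '12,50' into a float."""
    return float(price.replace(",", "."))

def import_catalog(lines):
    """The former sprawling function, now a readable composition."""
    catalog = {}
    for line in lines:
        name, price = parse_row(line)
        catalog[name] = normalize_price(price)
    return catalog

print(import_catalog(["Widget; 12,50", "Gadget; 3,99"]))
# {'Widget': 12.5, 'Gadget': 3.99}
```

Each extracted unit can now be unit tested on its own, which is exactly what makes the next practice in the list possible.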

As I was presenting, my feeling was that I was preaching on one side to converts who had done this before and knew it worked, and on the other side to the rest of the crowd, who did not want to hear that it would take a while and thought I was an idiot (a feeling emphasized by my poor performance, which I was aware of).

Another factor in the mix was that I actually had two presentations, neither of which I had given before, so both had to be prepared. Luckily, the second one, on unit testing, was a much easier topic, and I found that one went better. It was in a smaller room with fewer people. Everyone was close by, so it was a lot closer to a conversation. I accepted questions at any time. Surprisingly, they came in pretty much the same order I had prepared the content in, for the most part. The objective of this session was to bootstrap people with unit testing. My intuition told me that the main thing preventing people from writing unit tests was that they never know where to start. My plan was:

  • Explain quickly how unit testing fits in the development cycle and why test-first is really more effective if you want to write tests. I went over it quickly because I know everyone has heard that sermon before. Instead, I placed the emphasis on getting started with easy problems first, as writing tests requires building habits. It’s perfectly fine to start with new developments before going back to older code to test it.
  • Jump in a VM and go through the installation process for PHPUnit, setting up phpunit.xml and the bootstrap script, writing a first test to show it runs and can generate code coverage. I did it using TDD, showing how you first write the test, see it fail, then do what’s required to get it to pass.
  • Keeping it hands-on, go through various assertions that help write more expressive tests, using setUp, tearDown and data providers to shorten tests.
  • Move on to more advanced topics such as testing code that uses a database or other external dependency. I ran out of time on this one, so I could not make any live example of it.
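
The session used PHPUnit, but the same test-first loop can be sketched in any xUnit framework. Here it is with Python’s unittest and a toy function; each test was written before the code that makes it pass.

```python
import unittest

def slugify(title):
    """Toy function under test: lowercase a title and hyphenate spaces."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    # Written first, watched fail, then slugify() was filled in to pass.
    def test_replaces_spaces_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_lowercases_everything(self):
        self.assertEqual(slugify("README"), "readme")

# Run the suite explicitly instead of unittest.main() so this snippet
# stays embeddable in a larger script.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The PHPUnit equivalent is nearly identical in structure, which is the whole point: once the first failing test exists, the hard part of getting started is over.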

I was quite satisfied with the type of interaction I had with the audience during the presentation, and the feedback was quite positive too. It was a small room organized in a way that the audience closely surrounded me, rather than a long room where I could barely see who I was speaking to. Although there were only 15 attendees, I am confident they got something they can work with.

I could have used a dry run right before the presentation. I had done one two weeks prior, but it wasn’t fresh in my mind, so the delivery was not quite fluid. Some of that was deliberate, though, to show where to find the information.

During the other sessions I attended, I made two nice discoveries: Doctrine 2, which came up with a very nice structure that I find very compatible with the PHP way, and MongoDB, a document-based database with a very nice way to manipulate data and good performance characteristics for most web applications out there.

Bad numbers

The most frequently quoted numbers in software engineering are probably those of the Standish Chaos reports, starting in 1995. To cut the story short, they are the ones claiming that only 16% of software projects succeed. Personally, I never really believed that. If it were that bad, I wouldn’t be making a living from writing software today, 15 years later. The industry would have been shut down long before they even published the report. As I was reading Practical Software Estimation, which had a mandatory citation of the above, I came to ask myself why there was such a difference. I don’t have numbers on this, but I would be very surprised to hear any vendor claiming less than a 90% success rate on projects. Seriously, with 16% odds of winning, you’re better off betting your company’s future on casino tables than on software projects.

The big question here is: what is a successful project? On what basis do they claim such a high failure rate?

Well, I probably don’t have the same definition of success and failure. I don’t think a project is a failure even if it’s a little behind schedule or a little over budget. From an economic standpoint, as long as it’s profitable, and above the opportunity cost, it was worth doing. Sometimes, projects are canceled for external factors. Even though we’d like to see the project development cycle as a black box in which you throw a fixed amount of money and get a product in the end, that’s not the way it is. The world keeps moving and if something comes out and makes your project obsolete, canceling it and changing direction is the right thing to do. Starting the project was also the right thing to do based on the information available at the time. Sure some money was lost, but if there are now cheaper ways to achieve the objective, you’re still winning. As Kent Beck reminds us, sunk costs are a trap.

When a project gets canceled, the executives might be really happy because they see the course change. The people that spent evenings on the project might not. Perspective matters when evaluating success or failure. However, when numbers after the fact are your only measure for success, that may be forgotten. Looking back, it won’t matter if a hockey team gave a good fight in the third period. On the scoreboard, they lost, and the scoreboard is what everyone will look back to.

Luckily, I wasn’t the only one to question what Standish was smoking when they drew those alarming figures. In the latest issue of IEEE Software, I came across a very interesting title: The Rise and Fall of the Chaos Report Figures. In the article, J. Laurenz Eveleens and Chris Verhoef explain how the analysis made inevitably leads to this kind of number. The data used by Standish is not publicly available, so analyzing it correctly is not possible (how many times do we have to ask for open data?). However, they were able to tap into other sources of project data to perform the analysis and come up with a very clear explanation of their results.

First off, my question was answered. The definition of success for Standish is pretty much arriving under budget based on the initial estimate of the project with all requirements met. Failure is being canceled. Everything else goes into the Challenged bucket, including projects completing slightly over budget and projects with changed scope. Considering that bucket contains half of the projects, questioning the numbers is fairly valid.
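
As I read it, the bucketing can be summarized in a few lines. This is my own sketch of the definitions described in the article, not Standish’s actual methodology:

```python
# A sketch of the Standish-style buckets: only on-budget delivery with
# all requirements met counts as Success; cancellation is Failure;
# everything else lands in Challenged.

def chaos_bucket(canceled, cost_ratio, all_requirements_met):
    """cost_ratio = actual cost / initial estimate."""
    if canceled:
        return "Failed"
    if cost_ratio <= 1.0 and all_requirements_met:
        return "Successful"
    return "Challenged"

# A project 10% over its initial estimate with full scope delivered is
# "Challenged", even though most businesses would call it a success.
print(chaos_bucket(False, 1.1, True))  # Challenged
```

With a definition that strict, it is no surprise that half the projects end up in the middle bucket.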

I remember a presentation by a QA director (not certain of the exact title) from Tata Consulting at the Montreal Software Process Improvement Network a few years ago, explaining their quality process. They presented a graph of data collected from project post-mortems, where teams were asked to explain the causes of slippage or other types of failure (it was a long time ago, my memory is not that great), and there was a big column labeled Miscellaneous. At the time, I did not notice anything wrong with it; all survey graphs I had ever seen contained a big miscellaneous section. However, the presenter highlighted that this was unacceptable, as it provided them with absolutely no information. In the next editions of the project survey, they replaced Miscellaneous with Oversight, a word no manager in their right mind would use to describe the cause of their failures. It turns out the following results were more accurate about the causes. When information is unclear, you can’t just accept that it is unclear. You need to dig deeper and ask why.

The authors then explain how they used Barry Boehm’s long-understood Cone of Uncertainty and Tom DeMarco’s Estimation Quality Factor (published in 1982, long before the reports) to identify organizational bias in the estimates, and how aggregating data without considering that bias leads to absolutely nothing of value. As an example, they point out a graph of forecast-over-actual ratios for hundreds of estimates made at various moments in projects within one organization. The graph is striking: nearly no dots exist above the 1.0 line (there are a few very close by). All the dots indicate that the company almost never overestimates a project. However, the cone is very visible on the graph and there is a very significant correlation. They asked the right questions and found out why: the company simply, and deliberately, used the lowest possible outcome as its estimate, leading to a 6% success rate based on the Standish definition.
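
For the curious, DeMarco’s EQF can be roughly sketched as follows. This is a simplified version for illustration only; the real metric weights each estimate by how long it stood during the project, which this toy version skips.

```python
# Simplified Estimation Quality Factor: the actual result divided by the
# average absolute gap between the estimates and the actual. Higher is
# better; consistently accurate estimates score high.

def eqf(estimates, actual):
    mean_gap = sum(abs(e - actual) for e in estimates) / len(estimates)
    return actual / mean_gap if mean_gap else float("inf")

realistic = [90, 110, 95, 100]   # estimates hovering around the actual
lowballed = [50, 60, 70, 80]     # deliberate lowest-possible-outcome bids
actual = 100

print(eqf(realistic, actual))  # large: estimates tracked reality
print(eqf(lowballed, actual))  # small: systematic underestimation
```

A company that always bids the lowest possible outcome will score terribly here, no matter how well its projects actually go, which is exactly the bias the article identifies.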

I would be interested to see numbers on how many companies go for this approach rather than providing realistic (50-50) estimates.

Now, imagine companies buffering their estimates added to the data collection. You get random correlations at best because you’re comparing apples to oranges.

Actually, the article presents one of those cases of abusive (fraudulent?) padding. The company had used the Standish definition internally to judge performance. Those estimates were padded so badly, they barely reflected reality. Even at 80% completion some of the estimates were off by two orders of magnitude. Yes, that means 10000%. How can any sane decision be made out of those numbers? I have no idea. In fact, with such random numbers, you’re probably better off not wasting time on estimation at all. If anything, this is a great example of dysfunction.

The article concludes like this:

We communicated our findings to the Standish Group, and Chairman Johnson replied: “All data and information in the Chaos reports and all Standish reports should be considered Standish opinion and the reader bears all risk in the use of this opinion.”

We fully support this disclaimer, which to our knowledge was never stated in the Chaos reports.

It covers the general tone of the article. Beyond the entertainment value (yes, it’s the first time I ever associate entertainment with scientific reading) brought by tearing apart the Chaos report, what I found most interesting was the accessible use of theory to analyze the data. I highly recommend reading it if you are a subscriber to the magazine or have access to the IEEE Digital Library. Sadly, scientific publications remain restricted in access.

CUSEC 2010

This year, I attended CUSEC for the 4th time, twice as an organizer. Even though the target audience is students and I graduated what now feels like a really long time ago, I still wait avidly for the event every year. The conference is never really technically in depth; I see it as an opportunity to spot trends. Every single time, it seems to have a perfect mix. It tends to hook onto new technologies and hypes. After all, the program is put together by students.

This year, I think everyone would agree that the highlight was Greg Wilson, with a very strong invitation to raise the bar and ask for higher standards from software research. Very few quoted studies are actually statistically relevant in any way. I had seen the session before at Dev Days in Toronto (it was around 90% identical). I would see it again. Perhaps there would be even fewer FIXME notes in the slides. Greg is currently in the process of publishing a book on evidence in software engineering practice as part of the O’Reilly Beautiful * series. The book does not yet have a name, so I can’t pre-order it on Amazon, and that’s truly disappointing.

One of the lower-visibility sessions I found interesting was IBM’s David Turek on Blue Gene and scientific processing. Many discarded the session because it was given by a VP. Now, I don’t really care about scientific calculations. I believe they’re important, but that’s not where my interests lie. I am almost certain I will never use Blue Gene. However, what I found interesting was to see how they tackle extremely large problems. Basically, the objective is to have supercomputers with 1000 times the computational capacity we have today by the end of the decade. Using current technologies, you would need a nuclear power plant to power one.

Finally, Thomas Ptacek’s session on security was mostly entertaining. It was one of those 3-hour sessions compressed into one hour. I don’t think I caught everything, but he went over common developer flaws and how simple omissions can take down an entire security strategy. He concluded with a very useful decision-making process: if your encryption strategy involves anything other than GPG and SSL, refactor. It’s one of the problems I’ve always had with cryptography APIs. There are too many options, many of which are plain wrong and irresponsible to use. On the other hand, he was quite a pessimist during the question period, saying there is no hope of creating secure software using the current tools and technologies. All software ever made eventually had flaws found in it.

One of the most troubling moments of the conference for me was to see how disconnected some people can be. I actually came across a software engineering student (not a freshman) who did not know what Twitter was. Not only did he not know, he had never heard of it. How is that even possible? I don’t use Twitter. I use an open alternative, and I’m not that much into microblogging. However, I do believe it has somewhat reached the mainstream. You can hear the word while watching the news on TV. I really need to lower my assumptions about what people know.

My next conference will have me presenting two sessions and struggling to choose which of 8 sessions to attend every hour for 3 days.

Software education

There is a strong tendency these days to push for a reform of software education. For one, this is quite strange, as I don’t consider the field to have ever settled, and there are too many variants out there to say they are all wrong. Still, there is a consensus that the current state is pretty bad. Looking at the failure rates of software projects, this may well be true. Ever since the birth of the profession, a lot has been learned about software development. Yet the average curriculum of programming courses takes very little of it into account. Most of the ads you may see focus on programming languages and environments, pretending that you will know all there is to know at the end of however many years, months or weeks they decided to advertise.

There is a disconnect. Languages and platforms have very little to do with the actual skills relevant to software development, or with the failure rates of projects. This is why there is a call for reform. I have observed multiple opinion groups.

Reading code

There has always been a debate comparing software development to other disciplines, trying to find a metaphor to explain what we should be doing. Many of those metaphors lead to apprenticeship and the idea that we should be learning from the work of masters. The basic idea is that we should learn to read before we write. In fact, it does feel absurd that most of us wrote our first lines of code without understanding them. The act of writing the code and running it explained to us what it did. Honestly, I think that works pretty well. Code at that level has very little meaning outside execution.

Fortunately, this group actually aims for the higher-level benefits. We should be reading significant pieces of code to learn from the design decisions that were made. Learn about the trade-offs. The entire series of Beautiful [Code|Architecture|…] books fits this perspective. The main issue right now is that we don’t have a significant corpus of code we could reasonably get people to read. So few people read code that we can barely begin to identify the good parts. Good being very subjective to start with.

Imagine a first semester course in college where students never touch a computer. They are provided with code that they must read and understand. Exams? Write essays about the code to explain the author’s intent and decisions. Find flaws and boundaries. It certainly looks like a literature course, but it’s the kind of thinking a programmer needs to have when jumping into new code. They must understand the design decisions that were made. Software maintenance is a mess right now because very few people know how to read code. They don’t know how to spot the limitations and they just hack their way through it, corrupting the initial design until it becomes a faint memory from the past.

Reading code requires a special skill: pattern recognition. Unless you can abstract away from the individual lines of code, any attempt to understand a code listing longer than a hundred lines is bound to fail. Chess players don’t see their game as a series of individual moves. They have patterns they recognize and use to anticipate the opponent’s moves. Although we don’t have opponents (or shouldn’t have), knowing the patterns in code allows you to anticipate the attributes that come along with them, including extension points, possibilities and limitations.

Design patterns, which go well beyond the Gang of Four’s initial list, are a way to build a collective intelligence around patterns. It’s great to know them to communicate with others, but we should also get into the habit of building our own mental models. Maybe someday we’ll write up a pattern ourselves, or simply recognize that it already has a name when we hear of it.

Do it yourself

Almost completely on the opposite side, the DIY group encourages programmers to write their code from scratch. Write their own libraries. Their own frameworks. Clone existing applications. All of this for the sake of learning. You can spend years reading about others’ mistakes, and you may be able to remember some of them and avoid them when you encounter the situation. However, a mistake you make yourself is one you will remember forever.

In writing your own code, you learn a lot about the underlying platform. Once you’ve written a framework or two, you will be much more apt to understand other frameworks as you encounter them. You will understand why some things are done in a certain way, because you will have encountered these issues yourself. You will be able to notice details in the design that no one else paid any attention to, like a really clever technique to handle data filtering.

The DIY approach leads to specialization. It focuses on learning things in depth. Most open source projects started this way. The most talented programmers you might encounter probably learned this way. However, this is impractical in more traditional education. It takes a lot of time. It requires passion, and that can’t be forced onto someone. Within the few hours that make for a reasonable assignment, only very small libraries or focused tasks can be completed. What you can learn from scratching the surface is very limited. Going deep enough to really learn something would take months or years.

Arguably, most of the education today does something very similar to this. Teachers give assignments and students perform them. They are built in a way that the students will hopefully learn something specific related to the course along the way. Longer assignments would lead to more valuable learning, but they would also make evaluation much harder and the scope much wider.

An interesting aspect around the web in this area is code katas: small exercises professionals can do to practice and keep learning. I think DIY is really important, but it’s really part of ongoing personal development rather than something to be done in schools. Let’s face it: at the speed at which technology evolves in all directions, being outdated is a permanent state. The best we can do is make sure we’re not left in the Stone Age.
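To make the idea concrete, here is what one round of a kata might look like. The choice of FizzBuzz is mine, purely as an illustration; any small, well-bounded exercise would do.

```python
# A kata is a small, well-bounded exercise repeated for practice.
# FizzBuzz is a classic example: trivial on the surface, but a good
# canvas for practicing naming, tests and small design decisions.
def fizzbuzz(n: int) -> str:
    """Return "Fizz" for multiples of 3, "Buzz" for multiples of 5,
    "FizzBuzz" for multiples of both, and the number itself otherwise."""
    result = ""
    if n % 3 == 0:
        result += "Fizz"
    if n % 5 == 0:
        result += "Buzz"
    return result or str(n)

# One round of the kata: produce the sequence for 1 through 15.
sequence = [fizzbuzz(i) for i in range(1, 16)]
```

The value is not in the output but in the repetition: solving the same exercise again test-first, or with a different decomposition, is where the practice happens.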


Process

The process school of thought is morphing as we speak. There used to be a time when the idea was that if you pinned down requirements early on, made a good design and implemented it properly, nothing could go wrong. That’s what I was taught in college in the only course related to project management. These days, it may be XP or some other agile method. It’s still the same school of thought: given a good process, you will obtain good results, so education should focus on teaching good processes and you will obtain good software developers.

Processes are great, but they are mainly a management issue. A process well adapted to the project will allow the team to perform at its best. However, there is a catch: a mediocre team performing at its best is still mediocre. Process management is an optimization issue. It’s about making sure the ball is not dropped in some terrible way that could have been prevented, because the team had the skills to avoid it. Dealing with optimization before being able to solve the problem at all is a ridiculous idea. You might as well tell students to wear a suit at work to hide the fact that they don’t actually do anything.

The reason there are so many books on methodologies (luckily, the rate at which they are written has decreased) is that they all work: they were written by people working with great teams. Those teams could have followed no process at all and probably still would have succeeded.

Being able to follow a process is important for a developer. As they join a team in the real world, they will have to follow the rules to begin with. A basic understanding of the reasons behind them will allow them to perform better and stop them from questioning everything. However, processes can in no way be central to the education, unless you are training managers.

Best practices

These days, this group is a lot into enrolling students in unit testing from the start. Teach them TDD from day one so that they will never think of working any other way. Once again, I feel this is a lot of what schools have been doing forever. A best practice is very much dependent on its time. In college, we started directly in object-oriented programming because that was the hot word at the time. Sadly, just writing classes does not really provide a good understanding of what object orientation is all about. There is a balance required.

I would agree that TDD is a better stepping stone than object orientation or whichever buzzword we had in the past, because testing is unlikely to disappear. But very few projects are written test-first these days. Given that it has a huge impact on the design of software, I fear it would leave those students completely helpless in a typical workplace, having to deal with legacy code.

I think that teaching best practices is certainly one of the roles of education, but it should focus on those that transcend technologies and have greater odds of surviving. The buzzword of the day is unlikely to be the best choice. Perhaps the buzzword of the last decade that is still around is a more suitable one. Those are the practices the industry does not speak of because they are part of the definition of software engineering. I can’t even count the number of surprised faces I’ve seen after telling people that students don’t learn to write unit tests in college. That we don’t learn to use version control. We don’t get to package and release applications, or deploy them for that matter.

Using version control, releasing, deploying. Those are skills that are expected in the enterprise. They have been around forever. CVS wasn’t the first version control system out there, and people already speak of it as a long-extinct dinosaur. Yet, outside those who contributed to open source software in their spare time, I know of very few graduates who know how to use Subversion or any other tool. A large company I’ve worked at had a one-day course for all new hires which focused almost exclusively on version control. That means the absence of this skill is frequent enough, and important enough to them, to have full-time resources assigned to training. Why is that not taught in colleges? I don’t know.

Sure, unit testing, test-first and TDD are great. But perhaps there are more fundamental things to go through before.

But really, is software education all that bad, and does it matter anyway? There is some pretty great software out there, and there are excellent programmers who learned without ever going to school. There is certainly a need to shorten the training period from 20 years to 3, but experience will always have value. I learned programming by myself. I must have been 12 or 13 when I wrote my first line of code, and I probably would have done it younger had I had access to programming tools or the internet before that. I could navigate a computer from a DOS prompt before grade school, so I really don’t see why I wouldn’t have started programming earlier given the opportunity. Learning to program is not that hard. I went for a technical degree in college (3 years) and then software engineering in university (4 years), and that taught me a whole lot, but I could already program a web forum and knew basic SQL before entering college, and what pays my rent is what I learned from contributing to open source.

One can easily learn to program on their own, especially now with an unlimited amount of resources available. If there is one thing school can do, it’s distill all that information to help focus. The real question to ask is what education is trying to achieve. I really don’t think education has anything to do with being a good programmer or professional. That’s a question of passion and attitude. Most book authors I respect probably studied something unrelated, because there was no such thing as software education in their time. No matter what you put in the educational system, passionate people will end up on top. Don’t even think that can be influenced or that they can be manufactured.

If the goal is to provide workers (which it probably is, from the government’s standpoint), they should provide training that allows graduates to hit the ground running in a workplace. That means giving them techniques to understand large code bases and work in them efficiently, and teaching them to use the tools people really use. Then it’s all about learning to find the information you need, from valid sources, when you really need it. Most code out there does not use complex shortest-path algorithms. Sure, knowing those and the concepts of complexity is useful, but not crucial. When you need them, you have to go back and look them up anyway.

The OPUS failure

Early last year, the Montreal metropolitan area began the deployment of OPUS cards throughout the public transit system. The process was a long and painful transition during which half the gates to the metro stations used the new system and half were still on the old one, resulting in longer than average queues. At the end of June, they finally completed the transition and stopped selling the old tickets. I should say almost completed, because you can still come across some of the old gates.

The concept of the OPUS card is fairly simple. Nothing the world hasn’t seen before. An RFID card that can be used in the various public transit systems of the metropolitan area. The promise was interesting: a single card for all public transit needs, with unified, automated machines to purchase tickets, removing the need for interaction with other human beings.

It all started fine. Acting like a laggard, I waited until January to get a card. I could then get my monthly pass for the Montreal island, which served me on a day-to-day basis. I then got train tickets to visit my family once in a while. Everything worked fine thus far. A few months later, they modified the train schedules and they were no longer optimal for my needs. I figured the good old bus would do a better job, so I tried the machine to get tickets for the bus. It wouldn’t let me purchase them. It felt counter-intuitive, but I stood in line to speak to a human being. The woman did not know what was going on either and searched for a good 10 minutes to figure it out. I could feel the queue of people behind me growing impatient.

She finally gave up and gave me a “solo” card, which is a single-use RFID card containing the tickets. The one-card dream was over. I was already stuck with two. The worst part is, their signals conflict, so I now had to pull the card out of my wallet to use it. So much for using RFID.

Time went by and this mixed success held for a few months, that is, until I moved to a more central area. For the first month, I still had my monthly pass, but I quickly figured out I was only occasionally using public transit, because most of the places I need to go are within walking distance. Having traveled for the major part of August, getting a monthly pass was certainly not worth it anyway.

So I went out to get tickets instead, which are sold in packs of 10. I went to the machine. Once again, the option was not available. Once again, I got in the queue to talk to someone. The lady told me there was a problem with my card and it had to be “programmed” (an expression that makes me cringe when coming from a non-programmer) and that I had to go to the Berri-UQAM station to get that done. Quite off my route for the day, so I kept that for later.

A few days later, I did stop by the station and went in with my card, asking for 10 tickets. I did not mention the previous story about the card having to be “programmed”. That did not feel right. It turns out I got a different response: there was a conflict between the types of tickets. Here is how it goes. Public transit networks are mainly municipal. The STM handles Montreal, the STL handles Laval, the RTL handles Longueuil, and a few other smaller ones handle smaller agglomerations. The AMT is the umbrella organization handling infrastructure developments and operating above-ground trains. All networks sell tickets or passes valid on their territory, and the AMT sells tickets valid in zones.

It turns out the train tickets I had previously bought were not only train tickets; they were zone 3 tickets. What prevented me from buying tickets all along was that those zone 3 tickets would have been valid in the metro and most buses. The only problem is that those tickets cost about twice the price, so I’m not really interested in using them for anything other than the train.

Well, it’s a system limitation. Apologizing, the man at the Berri-UQAM station gave me a second OPUS card (from a single-card promise, I now have two bulky cards and a smaller one). He also told me that to avoid the problem, I should purchase zone tickets on solo cards instead of putting them on my regular OPUS card if I don’t use them on a regular basis. Here is the problem with this: solo cards are not distributed by the automated machines, and the non-terminal train stations do not have human operators.

My big question is this: is my use case so complicated that it was not even considered? How is the use case of a young man living in the city and visiting family in the suburbs about once a month a complex one?

How did this happen? It obviously is a problem of poor engineering work and misunderstanding of technology. They introduced a new technology to “simplify” the interconnection between the transit networks, but did it in a way that prevents any major change in the way they operate. Make-up on a monkey. Nothing more. They did not simplify the process. They unified it around a piece of plastic and made it harder for everyone to understand what is going on. Even though carrying a few stacks of tickets was annoying, every time I used one, I knew what was going on. Guessing which one the machine would use for me was out of the equation.

The way this is handled in most cities is to divide the territory into zones and have you pay based on how many zones you cross during transit. Someone figured that using the card both going in and going out of transit, like it’s done everywhere else in the world, was too complicated for the citizens of Montreal, even though they had smiling people standing around the stations for months during the transition to help people. They probably would have made a better investment by actually training the permanent staff to understand how it works.

Deep down, there is a technical issue. Without input on where you are going when entering transit, or a way to know where you get out, the system is stuck blindly picking a ticket. The real solution is to get additional input to understand the route and bill appropriately. That did not happen, or it was completely ignored by administrators who did not understand the technical requirements implied by basic logic. To avoid this terrible randomness, the engineering solution was to silently prevent purchasing conflicting titles on the same card altogether.

The AMT zones would have been a good solution: use the zones as the means of billing. Unify the currency, not just the payment method. Get rid of the accidental complexity imposed by the various networks without an overall vision. Especially the parts imposed by administrators in the suburbs who never actually used public transit and truly think it’s only for the poor rather than a valid means of urban transportation.

There was never enough political pressure to give the AMT the power it needed to unify the metropolitan networks. Worker unions in the various cities kept fighting to preserve the jobs, or really just to avoid having them reorganized. The networks did not bother finding a common financial model and kept working the way they used to. All of it resulted in a system with more flaws than features. The political issues between the various networks end up affecting us all.

Attempting to touch as little of the status quo as possible, all of the directors ended up agreeing on a common shopping list for the solution supplier. I will skip the open bidding process that typically comes with such projects, but here is a list of the requirements I would expect on such a list, based on what I have encountered:

  • Each network must be able to sell tickets and monthly passes valid on their territory.
  • STM also has weekly passes, daily passes and 3-day passes.
  • AMT sells zone tickets or monthly passes. Each zone may comprise one network or part of a network.
  • CIT Laurentides and CIT Lanaudière need to have tickets or monthly passes that are valid in both networks, which can be for the entire networks or only the south portions.
  • Monthly passes for a single bus may be available.

At some point, someone should have realized that these requirements conflict and that just asking for the card at the beginning of the trip could not possibly work. Instead of standing up and preventing a disaster, the supplier, who could not afford to lose the lucrative contract, simply accepted all demands. I don’t know the exact details of this contract, but I wouldn’t be surprised to hear that costs were well overrun.

There is a very simple analysis technique called the 5 Whys: for any demand, ask why to expose the underlying need, and repeat 5 times. I’m pretty certain most of the complexity in the above list came about because citizens complained that they had to carry too much change around and had to pay for every transit. If you have a unified card to pay with, change is no longer an issue.

Getting non-technical people to write down their wishlist is a terrible idea in any system design. It prevents the supplier from being creative and proposing a better, more efficient solution. Instead, they must build poor systems to meet the illogical needs of people without the depth of understanding required to design a solution.

A bit of traveling around to see what is done in the other metropolises of this world might have been useful as well. On my recent trip to London, I encountered the Oyster card. The concept is very similar to our OPUS, except that it was done a bit more efficiently.

  • The card knows about pounds, not tickets. You just load money on it and whichever transit you take will take what it needs.
  • It knows about your route and charges accordingly. If you take a bus, then the tube, it knows it’s the same travel and does not charge twice. I think it even caps at the price of a “daily pass” at some point.
  • Pricing on the tube is based on distance, by asking for the card both on the way in and on the way out. Short distances are less expensive.
  • You can attach your credit card to it online and configure it to top up when you go below a certain threshold, so you never have to worry about loading more money on the card.
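The charging model described above is simple enough to sketch in a few lines. All the fares and the daily cap below are made-up illustrative numbers, not Transport for London’s actual pricing:

```python
# A rough sketch of an Oyster-style pay-as-you-go model with a daily
# cap. The fares and the cap are hypothetical numbers for illustration.
DAILY_CAP = 7.00

FARES = {"bus": 1.50, "tube_short": 2.40, "tube_long": 3.90}

def charge_day(trips, fares=FARES, cap=DAILY_CAP):
    """Charge each trip at its fare, but never let the day's total
    exceed the cap; past the cap, further trips are free."""
    total = 0.0
    charges = []
    for trip in trips:
        fare = fares[trip]
        charged = min(fare, cap - total)  # clip to the remaining cap
        total += charged
        charges.append(charged)
    return charges, total

# Four trips that would cost 11.70 at face value end up costing only
# the cap, effectively turning the card into a day pass.
charges, total = charge_day(["bus", "tube_short", "tube_long", "tube_long"])
```

The point of the sketch is the design choice: because money, not tickets, is the unit of account, a cap or a transfer discount is just arithmetic on the day’s total, rather than a new ticket type to be programmed into every vending machine.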

Of course, this might not all have been possible in Canada. Our privacy concerns are typically higher than those of the citizens of London. London has so many cameras they wouldn’t really need you to have a card to know your itinerary and bill accordingly.

I don’t know exactly how it used to be before, but the system seems a lot better to me. At least, they made it simple to use and abstracted away the need for tickets.

Political issues keep bleeding through the deployment of OPUS. Some details are not related to technical issues at all; they are simply incomprehensible unless taken with some historical perspective. Coming back from visiting my family, I always had to pay for the metro in Laval even though I had a pass for Montreal and would spend most of the trip there. I could understand that. Even though the metro line connects, it’s actually a different network and I had only paid for Montreal. They want to make sure the people living in the suburbs pay a bit more, because their infrastructures end up being more expensive.

Now that I had tickets, I figured they would be good from Laval too. The tickets are the exact same price. That did not work as expected. The tickets valid on the same line are different depending on where you purchase them. I did not try, but from prior experience, I would believe I cannot purchase both at the same time. Once again, this is because the two cities could not agree on who would pay the bill for the extension of the network, which was estimated at under 200M and ended up costing 750M.

Don’t these administrators think about the madness they put the citizens through with all their internal battles? Typically, when you invest a few millions to improve something, it’s a good time to clean up the multiple hacks imposed by the previously incapable system you are replacing. They took the hacks and turned them into specs. Now we’ll have to live with them for the next 20 years.

Within weeks of completing the deployment of OPUS, thousands of Bixi bikes were placed around the city on hundreds of stations. A complete revolution in urban transit, they claimed. The creators did their homework: they studied how similar projects failed in other cities and managed to avoid the errors of the past. There are enough stations scattered around the city (about every two blocks) to make it useful, and they attempt to balance the bikes between stations (with moderate success). The company seems to be enjoying success, with deployments in a few smaller cities and projects for London and Boston. There is only one issue: those bikes can’t be used with the OPUS card. So much for standardizing.

Let’s see what the future will bring with the re-introduction of the tramway downtown, the extension of the metro by 20 km and the arrival of a train to the airport.