Reaching for simplicity

In designing a solution, it’s always a good thing to check out different options. In many cases, problems can be solved with either a complete hack or complete gold plating. Both are terrible, but it’s important to visit those extremes and try to find a middle ground.

Where does your average solution fit in?

  1. 2 hours
  2. 2 days
  3. 2 weeks
  4. 2 months
  5. 2 years
  6. never made it to release

The time frames definitely depend on the technologies and application domains. I personally like the 2 days to 2 weeks range. If I can’t get a proof of concept and a base architecture in 2 days, the design is probably bad. If it can’t be completed in 2 weeks, it probably could be simplified even more.

Everything I have worked on this year fits in this range. Short, high impact, high value, fun. I just hate wasting time on long projects. I may just be short-sighted, but I like to see results fast.

There are probably cases where a more polished solution than what can be made in two weeks is required, but these should be exceptions. If you are to embark on a long project, make sure it’s for the right reasons. Make sure you have explored the lightweight solutions from the lower scales first and that the benefits you get from the better solution are worth the 5x cost increase.

Is the only reason you feel like going up the scale that it would be fun to use cool new technologies? Go back to the academic architecture guidelines: what are your desired quality attributes? Do you need that much extensibility? Is performance so critical? Put on the executive hat: how much is it really worth? What could be sacrificed to fit the budget and bring the most value?

A little over a month ago, Nelson pointed me to Deki Extensions. The really nice thing about them is that they can be used to call webservices, and they really facilitate writing extensions to the wiki syntax. Tikiwiki already has plugins, which is somewhat the same concept as extensions, but they don’t allow webservice calls. The big advantage of such remote plugins is that they let you integrate content from external systems really nicely without having to modify the code base. As a use case, think of loading bug tracking information from BugZilla into a wiki page to complement the discussion.

There were really two opposite solutions to this one:

  • Write a webservice plug-in to do an HTTP request and dump the output on the page (2 hours)
  • Support the Deki Extensions altogether

Deki Extensions are amazing. The problem is that to support them, you basically need to support the DekiScript language that runs in the wiki page and emulate their environment. There may also be legal issues. Are we even allowed to support it? After implementation, we would always have to play catch-up as they evolved the specifications. Then would come incompatibilities, and we would have to make sure all extensions out there are supported. Implementation would be long and painful.

The webservice plugin would do the job, but it really isn’t elegant, and it’s completely unsafe as far as XSS goes. Not really useful in the end unless you fully trust all potential contributors to a page. Did I mention this is to run in a wiki? This solution is completely useless.

Something decent has to be somewhere in the middle. Let’s break down Deki Extensions and see what they are all about:

  1. A way to embed special content in a page
  2. Remote execution through a custom exchange format
  3. Possibly structured data output to be manipulated by local, user-defined, execution
  4. A registry to map remote services to local “function” names

Broken down that way, it looks a lot simpler. We already have an architecture to run custom code in a page called plugins. There are multiple standardized exchange formats out there, like JSON and YAML; we don’t really need DekiXML. A language to manipulate output really looks like a template engine. There are quite a few of those out there that can provide the necessary sandbox. The registry is really not complicated.

It does seem like it can be brought down to my preferred project size range by using existing components, which also has the side effect of reducing risk considerably. It also starts to shape up into a standard exchange format, doesn’t it?
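
To make that breakdown concrete, here is a minimal sketch of what the registry and the remote call could look like. All names and fields here are hypothetical; they only illustrate the shape of the idea, not an actual Tikiwiki or Deki API.

// Hypothetical registry: maps local "function" names to remote services.
$registry = array(
    'bugzilla' => 'http://bugs.example.com/wiki-service.php',
);

// Call a registered remote service with named arguments, using JSON as the
// exchange format, and return the structured result for a template to render.
function call_remote_plugin( array $registry, $name, array $arguments )
{
    if ( ! isset( $registry[$name] ) ) {
        throw new Exception( "Unknown remote plugin: $name" );
    }

    $context = stream_context_create( array( 'http' => array(
        'method'  => 'POST',
        'header'  => 'Content-Type: application/json',
        'content' => json_encode( $arguments ),
    ) ) );

    $response = file_get_contents( $registry[$name], false, $context );

    return json_decode( $response, true );
}

$data = call_remote_plugin( $registry, 'bugzilla', array( 'bug' => 1234 ) );
// $data can then be handed to a sandboxed template engine for display.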

The End of Design By Committee

The W3C has always had great intentions. The goal has always been to create great standards encompassing all possible situations and respecting all special needs. In the early days, great standards still living today were created, like HTML and CSS. Of course, there were problems. It took years for standards to be supported correctly because of ambiguities. Facing those problems, they decided not to make standards official until they were fully supported by enough implementations. XHTML 2 and CSS 3 never saw the light of day.

Standards like RDF never became what they were meant to be, nearly 10 years after the recommendation was proposed. SOAP and WSDL are huge buzzwords in the SOA world, but they never quite work as well as they should. Implementations are still incompatible, and subsets of the spec need to be used for communication to be handled properly. Not to mention there are still no traces of XLink, XPointer or XForms anywhere in the ecosystem. All these specifications appeared in the late 1990s or early 2000s. Who were they made for?

The big problem with all of them is that they are so abstract that no one outside the committee who designed them can understand their purpose, let alone attempt to support them. The specifications are too large. Too complicated. Building around XML probably wasn’t the best idea ever. It really is unlike anything else and painful to work with, unless you use even more XML technologies. It does not map well to common programming paradigms. It was only ever similar to HTML and SGML. Maybe those should have been taken as exceptions rather than the rule.

I consider the best specification built around XML to be XPath, but only because it removes all the burden of managing XML and it’s not XML-based itself. CSS is great for formatting HTML and, again, not XML-based. XSLT is not too bad because it plays nice with HTML, but I find some other techniques, like Zope’s TAL, to be a lot more elegant. TAL extends XML without adding to the tag soup.

By ignoring all the details, APIs like SimpleXML allow you to read XML seamlessly, but writing it is a completely different task. XML works everywhere, but it’s always alien to the environment.

Recently, I have noticed that the Web has started to regain its original nature. Standards are emerging rather than cultivated. The days where companies assigned employees to a consortium in order to write a specification are over. The W3C is still working on its specifications, trying to get them out the door, but nothing new has started in a long while. During that period, we got to see great standards establish themselves, not because they were backed by the industry, but because they were good.

Think about JSON and YAML. JSON is a subset of JavaScript that is well specified, easy to understand, and easy for a machine to read and write. YAML is a human-readable format that is formal enough to be parsed by a machine. What do they have in common? They map to programming concepts. All scripting languages out there can load them into their internal structures in a single function call and write them back just as easily.
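
In PHP, for instance, loading and rewriting JSON really is a single function call each way:

// Decode into a plain array, change it, encode it back.
$data = json_decode( '{"name": "value", "count": 3}', true );
$data['count'] += 1;
echo json_encode( $data );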

In the end, the problem can be distilled down to a single preference. You can either write a complex specification encompassing all possible cases, spend months implementing it and making sure it’s compatible, then spend 5 minutes configuring it to perform all the magic. Or you can take a simple data exchange format, hook it to a scripting language and spend anywhere from 15 minutes to a few hours doing what you need. At a large scale, complex standards are worth it, but in most cases, they are a waste of time.

One of the great aspects of web development is that there are so many problems that there are thousands of people thinking about them. Over the years, a great ecosystem of tools and techniques was built. These days, all you need to do is piece together existing components. HTTP is a good environment to make requests and get responses. Data serialization is available. All you need to decide is how to use them. Recently, Identi.ca/Laconi.ca wrote a small specification for open microblogging. OEmbed allows exporting the location of images and videos. When you look at those, the first thing that comes to mind is: why hasn’t anyone thought about it before?

It doesn’t have to be great. It doesn’t have to be so smart. We only need to agree on something, or do something and get others to follow. There is nothing religious about saying which field name will contain the location or the size of an image. There is no need for namespaces and extensibility. It does not even deserve debates or discussions. Just decision making. It’s a simple problem and it deserves a simple solution.

The specifications fit on a few sheets of paper. They can be read and understood by anyone who cares without investing significant efforts. Simple use cases can be illustrated. People get it.

There are so many ways in which different websites can’t talk to each other, which makes it painful to develop applications and forces people to re-implement the same things over and over again. In the new Web-SaaS-driven world, it’s a shame. Especially since the underlying protocol does not prevent anything. It’s just that no one took the time to write down the problem and write down the simplest solution that could work.

Sure, you could go out and write something generic that solves everything (it would probably end up looking like RDF). In the end, unless you know what you’re searching for, there is no way you will find it. Abstract tokens don’t help anyone.

I’m currently writing my own spec (more information soon). What are your problems in integrating with other applications?

Note to self: This post contains too many acronyms and references. I should look these up and link in the future.

Ease out transitions

Most software design out there is a matter of personal taste. There are very few widely agreed-upon rules. It has happened to all of us: you get to read a particularly bad piece of code and think it requires a complete rewrite. In most cases, it wouldn’t be hard to get people to agree with you. Rewriting would make everything more beautiful and allow easier modification. However, it has a terrible cost. It will always take longer than you expected. Bad code has this ability to hide features inside. While it may happen that some portions of code are dead, most of the time they serve a very specific purpose your great new design wouldn’t have considered.

Major backend rewrites also tend to leave the front-end behind. They will break the user experience. All the polishing that was done on the interface is likely to be gone because it was not re-implemented, or simply left broken. It would probably take months to reach the same level of external quality (compared to days or weeks to improve the internal quality).

When you are at point A and think it’s not the best place to be, there is nothing wrong with trying to go to point B. However, teleportation does not exist in the world we live in, and your app won’t just appear at point B the next day. Even if you rewrite everything and get a perfect backend and a better front-end, all that legacy data won’t just transfer itself. Data conversions are a pain. One of the reasons the code was so bad in the first place is probably that the data model was messed up. Converting data for all the edge conditions is a very long process and is always prone to break, which leads to users complaining. And that assumes your upgrade process even has a way to handle data conversions in the first place.

Before thinking about your grand new design, think about the transition. It will probably require more effort than the development itself. After you have rewritten everything, will you be able to make it from A to B?

Recently, I started working on restructuring the wiki plugin API in Tikiwiki. The plugins are great. They expose the different features of the wiki inside pages and allow you to create applicative wikis. The problem is that they are hard to use. The best ones have too many parameters, and not all of them are really well documented in the UI (while the documentation is sometimes better on the doc site). When too many parameters are used, the syntax just becomes unreadable. I decided to rework these during TikiFestStrasbourg, after each of us learned something new about plugin capabilities during the discussions.

These were some of the issues:

  • Documentation in the UI was a short blob of text containing HTML. It was not meaningful to users and a pain for translators to manage. Parameters were rarely detailed.
  • Plugin list was not filtered. All plugins were listed, even if related features were turned off. This created too much noise.
  • The syntax was hard to understand and a user interface would help a lot.
  • Caching did not behave nicely with some plugins.

To solve these issues, more meta-data is required about the plugins. The naive solution is to rewrite the plugin API entirely and make it good. After all, at this time, each plugin is stored in a separate file containing two functions that follow a naming convention. This is not a modern GoF-endorsed design!

Well, rewriting is a bad idea. Tikiwiki ships with around 75 plugins, plus a few in a separate folder to be enabled by sys-admins in controlled environments, plus an unknown number in mods, and all those custom plugins written for specific applications we have no control over. If we only had to consider our own work, it would still take around 40 hours to convert them all, assuming no clean-up is made and documentation is only entered as-is without improvements. Rewriting the API itself would be a 3-4 hour job at most, but the conversion is uncertain and the result would break upgrades on all customized installations.

I’d much rather compromise a little bit on beauty to save some pain. Instead, I added an extra function to provide all the meta-data and made sure the code that uses it acts conditionally, based on whether the new way of doing things is available, as sketched below. I can keep working on improvements without having to convert everything at once. No functionality is broken.
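
As an illustration only (the names below are hypothetical and not the actual Tikiwiki API), the pattern looks roughly like this:

// Hypothetical example: an optional companion function provides the meta-data.
// Old plugins only define wikiplugin_example(); new ones also define _info().
function wikiplugin_example_info()
{
    return array(
        'name' => 'Example',
        'description' => 'Displays an example box in the page.',
        'params' => array(
            'title' => array( 'required' => false, 'description' => 'Box title' ),
        ),
    );
}

// Callers act conditionally, so old plugins keep working unchanged.
$info = function_exists( 'wikiplugin_example_info' )
    ? wikiplugin_example_info()
    : null;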

Of course, the transition still has to be made, but at least this approach gives us more time to do it. No one will scream that their favorite plugin is broken. I will be able to merge back into trunk much faster and get help from the rest of the community.

There is more than just code involved.

Motivation Driven Development

Today is just one of those days when I can look back and laugh at my own behavior. I have been working on a personal project for a while now (it should go public soon). Of course, it started off with many great ideas and I could have fun just thinking about it. When the time came to actually code it, motivation dropped. The problem was really that, while I had a great goal, I had to get all the groundwork done before getting anywhere close to it.

What happened? Well, it froze. I stopped working on it for months, until recently. When I got back to this project, I came back because there was something specific I wanted to implement in it. When I took a good look at what I had in progress, I realized that the things I was focusing on were not getting me any closer to my goals. I just left everything as is. Tests were running. Some code was not used yet. No problem.

Instead of starting over from where I was, I mapped out the high-level features I wanted it to achieve and wrote down a road map. It was not based on building good foundations, not based on a good architecture. It was based on what’s needed for the software to be useful and what I felt like working on. What did it change?

  • Changes were visible on the final product
  • At every step, I would get closer to being able to use it, and find out different ways to use it
  • I got motivation to work on the project
  • The project evolved more in the last two weeks than ever before

So, why today? Well, that feature I had half-started months back, I finished it today in just 30 minutes. Months of no progress to avoid 30 minutes of work. It’s not that it was long, and certainly not hard. It was a boring task. It was necessary, but alone it did not do any good. Today, writing it enabled a very powerful feature. Even if it was boring, I was happy to do it because I would then see the whole thing in action. It’s not quite complete yet, as it still misses a few critical features required for normal use, but I can already use it for my own needs, which is great.

Now, if I had done it a few months ago, I’m pretty sure it would have been more than 30 minutes of work. It just takes me more time to do work when I’m not motivated. It’s also likely that the feature would have been more complete. Rather than doing what it has to in order to be useful, it would have become what it should be in order not to be so boring to write. Gold plating? Scope creep?

The scary part is that I’m pretty certain it’s not the first time I’ve dropped a project just because I didn’t feel like doing a tiny little part.

Who reads code samples?

Recently I have been reading Programming Collective Intelligence by Toby Segaran. I love the subject. It’s all about handling large data sets and finding useful information in them. Finally, an algorithms book that covers useful algorithms. I don’t read code-centric books very often because I think they are boring, but this one has a great variety of examples that keep it interesting as the chapters advance. There are also real-world examples using web services to fetch realistic data.

My only problem with the book is that there are way too many code samples. It may just be my training, but there are some situations where just writing the formula would have been a lot better. Code is good, but when there is a strong mathematical foundation to it, the formula should be provided. Unlike computer languages, mathematics as a language has been developed for hundreds of years and it provides a concise, unambiguous syntax. I like the author’s effort to write the code as a proof of concept, but I think it belongs in an appendix or on the web rather than between paragraphs.

Which one do you prefer?

import math

def rbf(v1,v2,gamma=20):
    # veclength is defined elsewhere in the book
    dv=[v1[i]-v2[i] for i in range(len(v1))]
    l=veclength(dv)
    return math.e**(-gamma*l)

or
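
(the formula, reconstructed here from the code above, assuming veclength returns the Euclidean norm)

rbf(v1, v2) = e^(−γ‖v1 − v2‖)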

For that kind of code, I vote #2 any time. I’m not a Python programmer. I can read it without any problem, but that vector subtraction did seem a little arcane at first and it took me a few seconds to figure out. I’m fairly certain that even a seasoned Python programmer would have stopped on that one. It’s not that it takes long to figure out, but it keeps you away from what is really important about the function: you want points that are far away from each other to score lower than those that are close by. Anyone who has done math could figure that out from the formula because it’s a common pattern. From the code, would you even bother to read it?

This is a very short code sample. In fact, it’s small enough that every single detail of it can fit into your short term memory. Here is an example that probably does not. In fact, I made your life a lot easier here because this code was scattered across 4 different pages in the book.

import math

# Euclidean distance between two points
def euclidean(p,q):
    sumSq=0.0
    for i in range(len(p)):
        sumSq+=(p[i]-q[i])**2
    return (sumSq**0.5)

# distances from vec1 to every point in the data set, sorted ascending
def getdistances(data,vec1):
    distancelist=[]
    for i in range(len(data)):
        vec2=data[i]['input']
        distancelist.append((euclidean(vec1,vec2),i))
    distancelist.sort()
    return distancelist

# Gaussian weighting function
def gaussian(dist,sigma=10.0):
    exp=math.e**(-dist**2/(2*sigma**2))
    return (1/(sigma*(2*math.pi)**0.5))*exp

# weighted average of the results of the k nearest neighbors
def weightedknn(data,vec1,k=5,weightf=gaussian):
    dlist=getdistances(data,vec1)
    avg=0.0
    totalweight=0.0

    for i in range(k):
        dist=dlist[i][0]
        idx=dlist[i][1]
        weight=weightf(dist)
        avg+=weight*data[idx]['result']
        totalweight+=weight

    avg=avg/totalweight
    return avg

or
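
(reconstructed from the code above)

w(d) = e^(−d² / (2σ²)) / (σ√(2π))

weightedknn(v) = Σ w(d_i) · y_i / Σ w(d_i)

where the sums run over the k nearest points, d_i is the Euclidean distance from v to point i, and y_i is that point’s result.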

The formula is insanely shorter, and the notation could certainly be improved. What’s the trick? It relies on well-documented mathematical constructs like vector operations and trims out all the Python-specific code. I actually wrote more than I had to, because the Gaussian itself is well defined in mathematics. Because all operations used are well defined, whichever language you use will probably support them, and you can use the best possible tool for your platform. The odds that I will be using Python when I get to apply those algorithms are low, so why should I have to bother with the language specifics?

The author actually included the formulas for some functions in the appendix. I just think it should be the other way around.

Formal proofs may be good after all

In the past few days, I spent a lot of time working on the Cross Lingual Wiki Engine Project. I finally found a solution for change tracking across multiple languages without restricting contributions. In fact, the core of it only took a few hours to write. Then I had to spend a few more to mine information out of the table. Most of the time I spent on the project recently went into writing an article on how the thing works.

My primary purpose in writing the article in the first place was to explain the solution to the other people working with me on the project. I made a few attempts before writing it down and it didn’t work out so well. I figured I was better off trying to structure my mind before trying to communicate, and I know of no better way to do so than by writing. Since the article is part of an academic project, I figured I should give it an academic feel (and I don’t mean making it boring). I started writing it in LaTeX using LyX. The tools are just amazing. Using them kicks me into flow in a matter of minutes. During my writing session at the Pub, I never noticed that 5 hours had passed, the sun had gone down and the room had become crowded.

Anyway, in giving it some academic feel, I went down to basic math concepts like sets and graphs. It’s quite nice because the theory maps really well to the problem. In fact, it was probably a strong influence. In all those years thinking about the problem, I followed multiple math courses, one of them being discrete mathematics, so I guess it oriented me towards the solution. It turned out that just mapping the architecture to those concepts allowed me to explain even more details. Mathematics is a very expressive language, and it has a set of solutions to many problems if you can express them correctly.

So for pages and pages, I go on explaining how it all works in math terms. Then comes the implementation section, where I explain how it works using real technologies. Today, after the review of a first draft, I decided to add an explanation of an additional query, which happens to be quite central. I didn’t know that before. I went on to describe it and explain in which ways it is correct. That was until I realized it was wrong. Simply not accurate. False in all possible ways. I had rewritten the query twice already because I had forgotten some corner conditions the first two times. I was pretty certain it was good this time. Mapping it back to the theory, I realized how wrong I was.

Then I went back to my code and made an attempt to catch that newly discovered corner condition. After expanding the query significantly (it had 4-5 levels of subqueries by that time), I realized that last level of nesting was really close to the real purpose. I went on and removed some code, and then more. The final expression is so simple. All I did before was run in the wrong direction.

I can’t be certain it’s right now, but at least it fits the model. My advice of the day: when working on a tough data modeling problem, try to explain it in terms an academic would understand. Write the formulas and prove them. Draw diagrams using GraphViz, or restrict your diagrams to 2 primitives. Write it in LaTeX to give it all an old-school academic look. The final document will be good, but it will be worth little compared to what you will learn on the way.

I still need to make a few changes based on the last review, and will probably go for a second round of review, but that article will eventually be published somewhere. Stay tuned.

History

Getting into someone else’s code ain’t easy. For some reason, the aesthetics of the code never quite match our preferences. The code seems dirty, naive, unstructured or plain ugly. One thing I have found over time is that, with enough exposure to it, it always ends up not being so bad. Even when you make the mistake of rewriting it altogether, some details of your own implementation remind you how elegant minor pieces of the previous one were.

It may be in the way an application was translated into multiple languages even if it was only ever to be used by half a dozen local users, or in the way the complete apparent lack of structure in a CMS allows different components to be really well integrated and provide a great experience to users. I find beauty in the story of the code and its authors. In fact, I think that learning about the evolution of the piece of software, the experiences of the past authors and the overall context in which the development took place is the only true way to appreciate working on code written by someone else.

No matter what, what we produce is only ever a reflection of what we are at that point in time. Our personal or collective stories brought us to that point. Unless you understand the motivations that drove the development, there is no way you can understand the basic qualities that were built into the system, and no way you can observe those qualities and esteem the code that lies before your eyes.

I tend to become really attached to the code I wrote. Seeing someone destroy my brain dumps without trying to understand them is simply irritating. As we evolve as programmers, we learn new tricks and tend to ditch our past accomplishments, but looking back, I can still respect qualities in my past work. During my first few months in the industry, my code was characterized by my own greenness. I had seen nothing of scale before, and therefore, my code was mostly original. It did not follow any common conventions, but would still bring ingenious solutions.

Later on, it moved to extremely object-oriented to avoid the mess, then to astonishingly rigorous, procedural and simplistic for the exact same reasons. Data models went from simplistic and easy to understand, to extremely pure and normalized, and then back to some sort of elegant middle ground. At any point in time, I could have justified my design decisions and defended them with pride, but only the story that led me there can explain those decisions. It may have been that the development project was completely exploratory, building new tools without knowing exactly where the road would end, or that we were trying to generalize the company’s processes after spending too much time maintaining code that did nearly the same thing. Each decision makes sense at the time.

These days, my priorities are focused on long-term maintainability. They might be different in a few months. My current goals are driven by my current situation. In the past, my goals may have been to learn and experiment rather than to serve clients and not have to answer their calls forever. One thing is for sure: if you try to maintain code that was built to be experimental, you will have a hard time, but you can still enjoy watching its evolution through revision control.

One thing is for sure: all creations are only a reflection of their authors.

The same thing applies to organizations. Company cultures reflect their founders and those who built them. A focus on quality and project control does not appear by itself. If a company is built by someone from the aerospace industry, for whom failure is not an option, it will definitely be very rigorous. The company culture will then affect its residents until they are ready to convince you it is the best way to do things. No one decides one morning to pick up a software estimation book and read it for fun. You get influenced to do so. Culture spreads and evolves. Creations are left all over as artifacts of time.

Books are such a great example. The software industry has so many trends. Some books are known as seminal, but if you read them today, they wouldn’t be so interesting. They would seem outdated, naive or plain bad. If you put yourself back in the context of 1970, with the information exchange technologies available, the understanding of computer science at the time and barely enough self-trained professionals, formal requirement and design processes make more sense than they do in today’s context. I never skip the preface and always take a quick look at the biography before reading a book. Otherwise, there is no way to understand the context in which it was written.

Time shows significant culture shifts, but market segments are almost more drastic. Agile methods are almost the norm today, yet some industries look at them and don’t quite understand how they fit in. Extreme Programming started on a payroll program as an internal development. No wonder the customer relationship and just-in-time requirement elaboration worked so well: the client was next door and, seriously, can you think of a better understood program than payroll? Would software controlling a multi-billion-dollar robotic arm operating in free space around a hundred-billion-dollar space station lead to the same decisions?

When looking at traces left from the past, context is everything.

Even simple events like conferences are strongly influenced by their organizers. Between 2003 and 2005, PHP Quebec had a professional track with use cases from a business perspective. Later on, those were replaced with more process-centric sessions. These days, most of it is about security, scalability and performance. The changes are certainly not only due to market changes. If you look at CUSEC’s speaker line-ups over time, you will notice different trends.

Spend some time looking around to figure out the history of what you encounter, and think twice before destroying things. Find the value first; then you will be able to judge whether it’s worth keeping.

Code Release: PDM Works Enterprise Wrappers

As mentioned last year, I am releasing my wrapper classes around the PDM Works Enterprise COM object library. This library was by no means created to map the API functionality to PHP classes. Instead, it provides the minimalistic set of functions I needed to perform my tasks. Feel free to improve them to suit your own needs. The classes are released under the MIT license, so use them at will.

To use the library, first create an instance of LPH_PDMWorks_Vault with your credentials. From there, you can obtain an instance of LPH_PDMWorks_Folder, which will allow you to reach any file or folder within the vault.
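
Roughly, that looks like the sketch below. The constructor arguments and method names here are illustrative guesses only; check the classes themselves for the actual signatures.

// Hypothetical usage sketch; actual method names and arguments may differ.
$vault = new LPH_PDMWorks_Vault( 'VaultName', 'username', 'password' );
$root = $vault->getRootFolder();              // an LPH_PDMWorks_Folder
$file = $root->getFile( 'Drawings/part-1234.slddrw' );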

Before the release, I added some documentation and removed some links to external elements. I did not fully test the classes afterward, but am fairly confident they will work. Enjoy!

I am no longer maintaining this code, so this is the first and final release. I can answer questions if any.

Code Release: OpenID Adapter for the Zend Framework

There is currently a proposal to include OpenID support in the Zend Framework. Until the implementation is finalized, you can use this very simple Zend_Auth adapter. To use it, simply follow the standard documentation and replace the adapter you would normally use with this one. The adapter relies on version 2.0.0 of the PHP OpenID Library. Enjoy.
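
For illustration, assuming the adapter class is named something like My_Auth_Adapter_OpenId (the actual class name in the download may differ), it plugs in like any other Zend_Auth adapter:

// Hypothetical class name; everything else is the standard Zend_Auth flow.
$adapter = new My_Auth_Adapter_OpenId( $_POST['openid_identifier'] );
$result = Zend_Auth::getInstance()->authenticate( $adapter );

if ( $result->isValid() ) {
    $identity = Zend_Auth::getInstance()->getIdentity();
}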

The code is not so smart. It’s heavily based on the samples provided by the library. Feel free to improve it to suit your own needs.

PHP Interoperability & PDM Works Enterprise

One of my clients recently purchased PDM Works Enterprise as a replacement for Visual Source Safe. In the past few days, I got to familiarize myself with the tool, and the objective is now to integrate its functionality with the existing intranet. It brought me into lands I had not visited in a long while: Microsoft technologies.

PDM Works Enterprise exposes a large COM interface for external applications to communicate with. The easy solution is to write VB applications to be called on the command line. All examples provided in the documentation are either VB or VC++, and autocompletion makes accessing the COM objects very easy. Before going that route, I decided to give the PHP COM extension a try, which reduces the amount of bridging required.

After a few minutes searching for the appropriate COM object to load, I got it running and everything worked flawlessly. I was really surprised. Finding the name of the COM object is far from easy though (at least for a non-Microsoft person like me). In the VB examples, EdmLib is imported and EdmVault can be instantiated. My first guess was something like this:

$vault = new COM( 'EdmLib.EdmVault' );
$vault->Login( ... );

Failure. After a few searches on Google, I came to the conclusion that no one had ever used PDM Works with PHP. No big surprise there. Worse, it seemed like I was the only one who couldn’t figure out where to find COM object names. How am I supposed to find them? I still don’t know, but a search in the registry brought me to this solution:

$vault = new COM( 'ConisioLib.EdmVault.1' );
$vault->Login( ... );

I could have figured out the Conisio part, which is the previous name of the product. Actually, I had made a few attempts in that direction, but the “.1” part, I had no clue about.

After that point, methods can be called. Some of them return objects, and those can be dealt with seamlessly. Performance is acceptable, and the API documentation provided is sufficient to remove the need for auto-completion. I still built myself an abstraction layer to get better error handling and, especially, to get rid of the weird calls required to iterate over the lists. Although the COM extension supports iterators, the library does not seem to use them.
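
The wrapping itself is mostly mechanical. As a sketch only, with GetFirstPosition and GetNextItem standing in as placeholders for whatever pair of calls the library actually exposes to walk a list:

// Sketch with placeholder method names, not the library's real API.
function listItems( $comList )
{
    $items = array();
    try {
        $position = $comList->GetFirstPosition();
        while ( ! $position->IsNull ) {
            $items[] = $comList->GetNextItem( $position );
        }
    } catch ( com_exception $e ) {
        throw new Exception( 'PDM Works call failed: ' . $e->getMessage() );
    }
    return $items;
}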

For some reason, I was expecting that I would need to run the PHP scripts from the PDM Works server. To my great surprise, they can run from anywhere, as long as the client software is installed and a local view is set up. I could fairly easily perform all the operations I needed.

This solves the problem of the intranet to PDM Works communication. I will still have to write some VB to handle the various hooks in the process thought. At this time, the greatest obstacle is that Microsoft won’t let me use their products. I first installed the express edition and got a product key registered. They key they provided does not work. Great. Since I had no time to waste getting it to work (because it’s probably a firewall or proxy issue), I actually attempted to purchase the standard edition. The store on the MSDN website only ships to US. Staples is out of stock and ordering is not enabled. Futureshop does not sell it anymore. Amazon.ca is out of stock. Amazon.com does not ship software in Canada. Every other online vendor I visited from the reseller page does not have the product in stock. Is it actually possible to buy Visual Studio?