Open Source and AI: a 2014 perspective

Peter Harrison - Feb 21 - Dev Community

I wrote the following email in July 2014 to the New Zealand Open Source mailing list. I'm adding it here because it documents my thoughts about where we were going with AI at the time. Perhaps more importantly, it discussed time frames.


Just as in numerous other specialties in software, there are open source artificial intelligence projects aiming to develop AI systems of various kinds.

But perhaps we need to ask ourselves where this is leading. It would be ironic if, by making software free, we were enabling a machine revolution.

Is this so far-fetched? Have there not been predictions about AI that have failed to come to pass for... like... forever? Well, just look at all the recent progress, both in software and hardware, but also in our understanding of how the brain works. Look at how many aspects of our lives are already being decided by computers.

Machines are already being used in business intelligence systems to help people make decisions. How long before the humans are simply removed?

Finally - should we consider regulation to limit computer technology? Would such a concept even work given the fractured legal environment of the world? And if we cannot stop this progress, what does it mean for us? Is it like the industrial revolution, where there is social change, or will it endanger us as a species? Or do you consider this simply alarmist - that we are decades away from needing to have such concerns?

In 2000 I thought about the previous 20 years, and how far we had come with computing since the early 1980s. I thought about what we know about our ability to predict the future in terms of technology. Now obviously you can't actually see the future, but just as climatologists can tell you statistical things about the climate without being able to tell you whether it will rain next Tuesday, there are statistical things we can say about the rate of technological development on a purely historical basis.

We tend to overestimate the progress we can make in the short term, while underestimating the progress we can make in the long term. So I thought about what this means in terms of the growth of computing power, networking bandwidth and understanding of the human brain. In 2000 there was some good progress on understanding the mechanisms of the brain. Unlike computing, the real work on the brain was just starting, but it was on a growth curve.

So I made a pretty rough estimate of when we would cross the line in terms of thinking machines. The probability of reaching it in 100 years I judged virtually certain given the exponential rate of progress. The probability of reaching it in 50 years was almost as certain. In 1981, at age eleven, I got my first computer. It had 1K of RAM, and had to use a reasonable portion of that for the screen display. We had gone from 1K machines in 1980 to machines with something like 64MB in 2000, and there was still no sign of leveling off.
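As a quick sanity check on that growth figure, here is a minimal sketch. It uses only the rough numbers quoted above (1K in 1980, ~64MB in 2000), which were memory-based estimates rather than precise data:

```python
import math

# Back-of-the-envelope check on memory growth, using the rough
# figures from the text: ~1K of RAM in 1980, ~64MB in 2000.
start_bytes = 1 * 1024          # ~1K in 1980
end_bytes = 64 * 1024 * 1024    # ~64MB in 2000
years = 20

doublings = math.log2(end_bytes / start_bytes)   # 16 doublings
doubling_time_months = years * 12 / doublings    # ~15 months

print(f"{doublings:.0f} doublings in {years} years")
print(f"implied doubling time: {doubling_time_months:.0f} months")
```

Sixteen doublings in twenty years works out to a doubling time of roughly fifteen months, which is consistent with the classic Moore's law figure of eighteen months or so.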

The thing about exponential curves is that the actual progress doesn't look great for most of the time. If we know that the doubling time is eighteen months, we could project that we would be 'halfway to intelligent machines' only eighteen months before the actual breakthrough. We would be a quarter of the way only three years before the breakthrough. This is the curve computing has followed. Given the rate of development over the preceding 20 years, I figured that it was virtually certain that we would reach machine intelligence in 50 years.
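To make that concrete, here is a minimal sketch of how far along an idealised exponential curve you would be, measured backwards from the breakthrough. It assumes the fixed eighteen-month doubling time from the paragraph above; the capability units are arbitrary:

```python
# Fraction of final capability reached, measured backwards from the
# breakthrough, assuming a fixed 18-month doubling time.
DOUBLING_MONTHS = 18

def fraction_complete(months_before_breakthrough: float) -> float:
    """Fraction of the final capability reached this many months early."""
    return 0.5 ** (months_before_breakthrough / DOUBLING_MONTHS)

for months in (72, 36, 18, 0):
    print(f"{months:>3} months before: {fraction_complete(months):.0%} of the way")
```

Running this prints 6% at six years out, 25% at three years, and 50% at eighteen months - exactly the "quarter of the way only three years before" effect described above. Most of the visible progress arrives in the final few doublings.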

Okay - so how about 20 years? Now it's getting far less certain. Perhaps 50/50. We were, after all, on the front edge of the curve of brain investigation in 2000. What about 10 years - which would have meant a breakthrough around 2010? This I felt was getting into the region of overestimating progress in the short term. There just would not be enough time, even with an exponential curve. I concluded that there was a fair chance of getting there by 2020, and an almost certainty of getting there by 2050.

We now see Google Cars, Microsoft Translator, high-frequency trading, business intelligence systems, facial recognition, and real-time visual tracking. I think we are roughly a quarter of the way there - it feels like we are. And that puts the date for machine intelligence far closer than 2045. In fact I think my 2020 estimate is still in the ballpark.

This isn't precognition or anything - it is just trying to work it out based on the evidence. I ran through all this before even hearing about Ray Kurzweil and the Singularity, but it seems that we are following the curve as anticipated, both in the field of computing and software and in our understanding of the brain. My reasoning appears to be identical to his, only he has done far more work on it. My ideas were little more than back-of-the-envelope calculations based on the summary data I had to hand.

The irony here is that we are the 'free software' movement - what happens when software becomes sentient? Does freedom mean freedom for these machines? What was perhaps best left to science fiction writers and philosophers may very well become a practical question in the not too distant future.


At the time of writing we have just seen the first release of Sora from OpenAI, a model which generates video from text. With the 2024 election season, the threat of a system that can create deepfakes from a simple text prompt is all too evident. Job losses are no longer a future concern but one that is impacting people today. Perhaps the most disturbing aspect is that the impact is inverted from expectations: artists and other creatives have been the first to be affected rather than the last.
