Above: Docker's DockerCon conference in San Francisco on June 22.

Docker is hot. That much was clear from the buzz — and the news — around this year’s DockerCon. The evidence suggests that the billion-dollar company responsible for accelerating the “container revolution” is at a tipping point.

550 people attended DockerCon in 2014. This year, there were at least 2,100. Docker chief executive Ben Golub shared a few more stats during his keynote to show how much the ecosystem has grown since the last event:

  • The number of open-source contributors to Docker went from 460 to 1,300, a 183 percent surge.
  • The number of Docker projects on GitHub went from 6,500 to 40,000, up 515 percent.
  • And perhaps most impressively, the number of Docker-related job listings went from 2,500 to 43,000, an increase of 1,620 percent.

VentureBeat’s Jordan Novet has been covering Docker longer and better than almost anyone else. His coverage of DockerCon 2015 is a great starting point for learning what’s going on with this technology.

Here’s the CliffsNotes version, based on Jordan’s coverage and my conversations with various people this week: Linux containers are a technology for running multiple applications in isolated instances on a single physical server. Docker made containers far easier to implement, and in the process turned them into a tool used by many developers. With containers, you can build an application in one place (say, on your laptop) and then easily move it to a server in your test environment, and from there to another server (say, in production, or in a public cloud).
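To make that portability concrete, here is a minimal sketch of the workflow. The image name, application file, and registry are hypothetical stand-ins, but the Dockerfile directives and docker commands are the standard ones:

    # Dockerfile: package a small Python web app and its runtime into a single image
    FROM python:2.7
    COPY app.py /app/app.py
    WORKDIR /app
    CMD ["python", "app.py"]

    # Build the image on a laptop, push it to a registry, then run it anywhere Docker is installed
    docker build -t myorg/myapp:1.0 .
    docker push myorg/myapp:1.0                    # upload to a registry such as Docker Hub
    docker run -d -p 8080:8080 myorg/myapp:1.0     # on a test or production server

The same image runs unchanged on the laptop, the test server, and the production host, and that consistency is a big part of why developers have latched onto Docker.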

Container code is open-source, lightweight, and easy to use — but it’s also missing some basic features that more complicated virtualization technologies have. For example, containers don’t have built-in networking capabilities, so they can’t easily communicate with one another. (Or, they didn’t — until this week, when Docker added native software-defined networking features to enable that.)
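Roughly, the new model lets you create a named network and attach containers to it so they can find each other. The following sketch uses the docker network commands as they later appeared in stable Docker releases, with made-up container and network names:

    # Create a user-defined network, then attach containers to it by name
    docker network create appnet
    docker run -d --name db  --net=appnet postgres
    docker run -d --name web --net=appnet myorg/myapp:1.0
    # From the "web" container, the database is now reachable simply as the hostname "db"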

As developers have embraced Docker, they’ve forced companies to reckon with these missing features. And those needs are giving rise to a small but ambitious crop of startups interested in helping enterprises plug the holes.

  • WeaveWorks, for instance, offers software-defined networking and network-management tools to connect Docker containers to one another. It has raised $5 million to date.
  • Portworx provides software-defined storage for Docker containers, so they can easily connect to data sources in the cloud or in companies’ own data centers. And it plans to expand beyond storage to other spheres as well. Portworx raised $8.5 million, it announced this week.
  • Rancher Labs provides a lightweight Linux operating system designed for containers, along with management tools. It raised $10 million, it announced earlier this month.

Rancher “makes it simple to run Docker containers in production,” its tagline proclaims, and that’s a clue to where these startups are all aiming. In fact, that was the theme of DockerCon too.

Docker’s popularity is now putting pressure on IT departments to ensure that containers are well-supported in production environments.

Companies that supply enterprise technology to IT departments are feeling the pressure, too, and are beginning to respond. For example, Microsoft has aggressively embraced Docker, having announced Docker container support for the next version of Windows Server last October; Microsoft’s Mark Russinovich gave a demonstration at DockerCon showing how he could deploy code simultaneously to a Linux container and to a Windows container. And VMware showed off its own Project Bonneville, which lets people run Docker containers inside VMware virtual machines. Since, as VMware pointed out, a container is sort of like a virtual machine, that’s not as bizarre as it might sound.

In short, this has all the hallmarks of a classic, bottom-up enterprise technology transformation. Compare it to the shift from mainframes and minicomputers to PCs, which was driven by employees bringing tools they needed into the office, and which ultimately forced IT departments to build client-server networks around the new tools. Or compare it to the shift from client-server to Internet architectures, where developers, applying what they’d learned building websites and Web applications, gradually forced their IT departments to see the wisdom of basing almost everything on HTTP and TCP/IP. Or compare it to the (still ongoing) shift to the cloud, where developers and business managers, impatient with waiting for their tech teams to implement something critical, just put down a credit card and begin using a software-as-a-service (SaaS) application instead — eventually forcing enterprise IT to accommodate and embrace the cloud, too.

Will Docker succeed in shifting the architecture of enterprise IT? It’s too soon to tell. Dozens of things could go wrong for Docker in the next couple of years. But it’s certainly on the right track, and 2015 may in retrospect look like the moment when a new type of IT infrastructure really started to take off.


Originally published on VentureBeat » Dylan Tweney: http://ift.tt/1SOHs08