Does Moore’s Law still hold true?

You don’t have to be a software programmer to be familiar with the principle. Since the early 1970s, Moore’s Law — named after Gordon Moore, one of the founders of Intel — has been universally touted within the computing industry. The law has many variants, but the gist of it is this: Computing power will increase exponentially, doubling every 18 to 24 months, for the foreseeable future.

Too bad it isn’t true. According to Ilkka Tuomi, a visiting scholar at the European Commission’s Joint Research Centre in Seville, Spain, not only is Moore’s Law losing significance, but it never fit the data very well in the first place. In an academic paper published last month, Tuomi dissects the many variants of Moore’s Law and shows that, in fact, none of them match up well with actual advances in chip technology. (See Tuomi’s paper for more.) For example, processor power has increased dramatically since 1965, when Moore first proposed his law, but at a slower rate than expected, doubling about every three years instead of every two. That works out to roughly a tenfold increase in processing power per decade, compared with a 32-fold increase per decade with a two-year doubling period — a big difference.
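
For readers who want to check the arithmetic, here is a minimal sketch (in Python, purely for illustration; the calculation is mine, not Tuomi’s) that converts a doubling period into growth per decade, assuming simple exponential growth:

```python
# Growth per decade implied by a given doubling period, assuming simple
# exponential growth (a back-of-the-envelope check, not Tuomi's model).
def growth_per_decade(doubling_period_years: float) -> float:
    return 2 ** (10 / doubling_period_years)

print(f"2-year doubling: {growth_per_decade(2):.0f}x per decade")  # ~32x
print(f"3-year doubling: {growth_per_decade(3):.0f}x per decade")  # ~10x
```

An 18-month doubling period, the most optimistic reading of the law, would imply roughly a 100-fold increase per decade.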

What’s more, it’s hard to translate processor power into increased computing power, because there are so many other factors involved in computer performance. As anyone who has been forced to buy a faster, more powerful computer in order to run the latest version of Windows knows, today’s operating systems are memory and processing hogs. You probably aren’t much more productive on a top-of-the-line 2-gigahertz Pentium 4 desktop running Windows XP today than you were with a 300-megahertz Pentium II running Windows 95 five years ago. The sad fact is that the hardware upgrades of the past decade have been driven more by Microsoft operating system demands than by consumers’ demands for more power. As the old saying goes, Andy Grove giveth, and Bill Gates taketh away.

If Tuomi’s right (and I find his argument persuasive), why should we care? First, Moore’s Law gives the false impression that progress in the semiconductor industry is unlimited and unconstrained by the laws of supply and demand. Unfortunately, that just ain’t so. In reality, the cost of chip factories increases exponentially with each new generation of processors (a trend known as Moore’s Second Law). For example, Intel is spending $2 billion on its latest chip fabrication site in Kildare, Ireland. That’s a very big bet that continued demand for more processing power will eventually sell enough chips to pay for the plant. Take away the demand and you’ve got an economic crisis in the semiconductor industry. More important, Tuomi’s analysis shows that processor power alone is only part of the business of technology — and an increasingly small one at that. Look at any company’s IT infrastructure today and you’ll see that processor power is not a significant issue. There’s more than enough power available (unless you’re one of the workers unlucky enough to be saddled with a four-year-old desktop trying to run Lotus Notes R5 or Windows XP). The biggest corporate technology problems now have to do with storing, managing, organizing, retrieving, and guarding increasingly huge amounts of data.

That’s why the hottest areas for enterprise IT are in segments like storage, knowledge management, customer relationship management, business intelligence, and data mining. These systems are all about handling large amounts of information — and making it useful. Significantly, such systems often require that you spend more time reworking business processes and training employees than you devote to installing the technology itself.

“Sometimes we perhaps invest disproportionately in technology, believing that technology, as such, solves our problems,” Tuomi says. “We often underestimate efforts and investments needed for organizational change and new work practices, for example.”

The challenge now is not finding new and more powerful technologies to serve our needs — it’s organizing our companies and our work lives so that we can use those technologies more effectively. We can no longer trust in the magic wand of Moore’s Law to solve our computing problems for us. Instead, we must learn how to use the tools we already have.

This will be my last Defogger column for Business 2.0. I’ve written more than 80 of these columns since July 2000, and I hope that during that time I’ve helped you to understand and make smarter decisions about technology and its strategic uses in business. Now it’s time for me to move on. If you want to find out what I’m working on in the coming months, please sign up for my personal newsletter. So long, and thanks for all the e-mail!

Link: Does Moore’s Law still hold true?

Link broken? Try the Wayback Machine.


False Alarms on the Firewall

How can you separate a legitimate security threat from routine traffic? A recently upgraded software product can help.

Computer security experts are fond of reminding people just how vulnerable their defenses really are. And for good reason: No security system, no matter how comprehensive or well-designed, can thwart every possible attack directed against it. Hackers and virus programmers are constantly coming up with new tricks, and system administrators can’t anticipate — much less prevent — each and every one of them. Witness September’s Slapper worm, which targeted the popular Apache Web server, or last month’s denial-of-service attacks on the Internet’s central domain name servers. Both came out of nowhere and did substantial damage before system administrators were able to put countermeasures in place.

To help keep their networks safe, many companies have started using intrusion detection systems, or IDSs. Cisco (CSCO) and Internet Security Systems both sell proprietary IDSs, and there’s a popular open-source version known as Snort. The software in these systems functions a bit like the alarms and security cameras in a bank: It doesn’t actually stop the crime, but it does warn you when an attack is in progress and provides a record of what happened, in order to help you catch the hacker or prevent similar attacks in the future.

That’s the theory, at any rate. The unfortunate reality is that IDSs typically generate a lot of “false positives” — like car alarms on city streets, they’re going off all the time even when there’s no real threat, which makes them more of a nuisance than a genuine deterrent. “There’s too much traffic out there that’s normal but looks suspicious to an IDS,” says Pete Lindstrom, research director for Spire Security. Your IT staff can tune the IDS to your network environment, reducing the number of false alarms, but that takes effort and time — months, in many cases.

ForeScout Technologies offers one of several responses to the problem of false positives. It sells security software called ActiveScout that works like an IDS, watching the traffic going in and out of your network for any suspicious activity. But when it detects something suspicious — for example, someone scanning your servers for open ports or requesting a username and password — the software goes active, sending out a bogus, “tagged” response. To the person doing the scanning, this looks like an ordinary reply, but if he tries to act on that information (say, by using the supplied username and password), he’ll give away his true status as an interloper. ActiveScout will immediately block that person’s access to your network, and only then will it notify your network managers.

The strategy works better than passive IDSs because most network attacks are preceded by some kind of reconnaissance. If you can correctly identify the reconnaissance, you can more effectively avert the subsequent attack.
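
To make the idea concrete, here is a toy sketch of that tagged-response approach, written in Python purely as an illustration; it is not ForeScout’s code, and every name and address in it is invented:

```python
# A toy illustration of the tagged-response idea, not ForeScout's actual code:
# reconnaissance gets a bogus credential, and any later attempt to use that
# credential marks the sender as an intruder. All names and addresses invented.
import secrets

issued_tags = {}      # bogus credential -> IP address it was handed to
blocked_ips = set()   # sources we have decided to shut out

def respond_to_probe(source_ip: str) -> str:
    """Answer a suspicious probe with a bogus, tagged username."""
    tag = f"guest-{secrets.token_hex(4)}"
    issued_tags[tag] = source_ip
    return tag

def inspect_login(source_ip: str, username: str) -> bool:
    """Block the source and return True if a tagged credential is replayed."""
    if username in issued_tags:
        print(f"Intruder: {source_ip} replayed bait credential {username}")
        return True
    return False

# Example: a scanner probes, then tries to log in with the planted credential.
bait = respond_to_probe("")
assert inspect_login("", bait)
print(blocked_ips)  # {''}
```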

ForeScout, which released a new version of ActiveScout this week, has about 20 corporate customers so far. One of them is Risk Management Systems, which provides risk analysis services to insurance companies and other financial institutions and has been using ActiveScout for about a year. According to Barry Choisser, the firm’s network manager, no attacks have made it past the system’s defenses during that time, despite frequent, often hourly, attempts. Nor has ActiveScout mistakenly blocked any legitimate traffic. It hasn’t required much maintenance — a boon for Choisser, who oversees just two people responsible for defending the company’s California headquarters as well as offices in North America, Europe, and Asia — and hasn’t needed the frequent tweaking that most IDSs (and most security tools of any type, for that matter) require to recognize and respond to the newest attacks.

ActiveScout isn’t alone in this battle. Other IDS vendors, such as IntruVert Networks, are using sophisticated analysis techniques to identify and stop network attacks more quickly and effectively. None are perfect. That’s why you still need firewalls, virus scanners, and other security measures. But these developments in IDS technologies should be welcome news for companies defending their virtual borders against an increasingly sophisticated crowd of viruses, worms, and hackers.

Link: False Alarms on the Firewall

Link broken? Try the Wayback Machine.


Still Waiting for the Web Services Miracle

They haven’t changed the world yet, but there are ways to make them work.

If you flip through the technology magazines of a year ago, you’ll likely find a lot of stories touting Web services as the next big new technology you need to know about. The promise: programming standards that would allow different applications to talk to each other over the Internet. Just as browsers connect with websites to download pages, applications could connect with one another and exchange information.

Assuming it all came together as planned, companies would be able to “rent” applications only when they needed them. Looking to display some information visually? Don’t buy a whole spreadsheet application — just connect with an online graphing component via Web services, graph your data, and then disconnect. For programmers, the dream was even more exciting: With the ability to assemble standard components from a variety of sources, all available online, building business applications would become as easy as clicking Lego pieces together.

Time for a reality check. According to a recent report by Rikki Kirzner, research director at IDC, it will be at least 10 years before companies can actually build applications out of online components in this manner. “All of that’s not doable today, or next year, or the year after,” Kirzner says. “There’s a big pitfall for those who believe that this kind of capability will exist next year.”

That’s not to say the technology is 100 percent hype. A few basic programming standards have already been established, like the simple object access protocol (SOAP), which defines the way applications can request and deliver data using extensible markup language. SOAP has already achieved wide acceptance in the past year, with SOAP-compatible software-development tools available from Borland (BORL), IBM (IBM), Microsoft (MSFT), Sun (SUNW), and many others. When I covered the topic one year ago, SOAP and similar standards were in their infancy, and IT managers were viewing Web services with interest, but also with justifiable skepticism. (See “A Common Language for the Next-Generation Internet.”)
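
To give a flavor of what that looks like in practice, here is a rough sketch of a SOAP 1.1 request sent with Python’s standard library; the endpoint, namespace, and GetQuote operation are placeholders I’ve made up, not a real service:

```python
# A rough sketch of a SOAP 1.1 call made with Python's standard library.
# The endpoint URL, namespace, and GetQuote operation are invented
# placeholders, not a real service.
import urllib.request

envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="">
    <GetQuote xmlns="">

request = urllib.request.Request(
    "",                       # hypothetical endpoint
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": ""},

# Uncomment to send the request and read the XML reply:
# with urllib.request.urlopen(request) as response:
#     print("utf-8"))
```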

But integrating your Java-based application with someone else’s 20-year-old Cobol program still takes time, coordination, and a team of programmers. That’s because so many different elements all have to match up. (Think about how hard it is to get software integrated within a single company, let alone integrate it with the software of other companies around the country.) Current Web services standards help, but they don’t solve the problem outright — and what’s more, they lack many of the features required by enterprise applications, such as ironclad security, the ability to guarantee the integrity of transactions, and a seamless way to exchange information in real time.

If you forget the grandiose promises, there are some things Web services are good for right now, and most of them take place not over the Internet but within a company’s intranet. That way it’s all inside the firewall, and your IT staff controls exactly what’s being connected and what’s getting exchanged. For example, Web services are helping companies tack new capabilities onto old, so-called legacy software. “Instead of replacing legacy applications, you’re now extending the life of those applications through Web services,” says Alan Boehme, executive vice president and chief information officer at Best Software.

Gartner, another research firm, identifies corporate portals — those Web-based “dashboards” that combine information from a variety of company information systems — as one area where Web services are finding traction. To employees, the portal looks simple enough, but when they make requests or enter information (to, say, change their 401(k) preferences or view the previous quarter’s sales reports), different applications kick in to execute those commands. Behind the scenes, Web services are increasingly being used to send such commands to the various applications and to consolidate the results onscreen.
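
In outline, the pattern behind such a portal is straightforward. Here is a minimal, hypothetical sketch of a portal layer calling two back-end services and consolidating their replies; the URLs, service names, and JSON responses are placeholders, not a description of any vendor’s product:

```python
# A minimal sketch of the portal pattern described above: a thin portal layer
# calls separate back-end services over HTTP and consolidates their replies
# into one view. The URLs, service names, and JSON responses are hypothetical.
import json
import urllib.request

BACKENDS = {
    "benefits": "",  # 401(k) data (made up)
    "sales": "",    # sales reports (made up)

def fetch(url: str) -> dict:
    """Call one back-end service and parse its (assumed) JSON reply."""
    with urllib.request.urlopen(url) as response:
        return json.loads(

def build_dashboard(employee_id: str) -> dict:
    """Gather each service's answer into a single model for the portal page."""
    return {name: fetch(f"{url}?employee={employee_id}")
            for name, url in BACKENDS.items()}

# Example (requires the hypothetical services to exist):
# print(build_dashboard("e12345"))
```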

Baby steps, to be sure, but right now it’s better than nothing. And as corporate offices put the technology to work in-house, software companies are gradually upgrading their programs to speed the process along. According to Gartner, makers of enterprise software are rapidly adding Web services capabilities to their existing products, which will ultimately simplify the process of linking those products with the rest of your IT infrastructure. This charge is being led by Microsoft, with its .Net initiative; IBM, with its WebSphere product line; and BEA, with its WebLogic products.

Sun Microsystems has added Web services support to Java and to its Sun One software line, but until recently it has not played a strong role in defining and promoting Web services. However, Sun last week joined the Web Services Interoperability organization, a key consortium responsible for defining Web services standards, which may indicate that the company is taking Web services more seriously than ever.

These makers of enterprise software clearly believe in the future of Web services — and their customers are starting to pay attention. But for now, Web services are more of an evolutionary change than a true revolution in computing. It will be a long time before you can build your own enterprise applications out of components that you pick up at the software mall.

Link: Still Waiting for the Web Services Miracle

Link broken? Try the Wayback Machine.


The Santa Slam

The holiday rush is coming, and as usual, many sites won’t be able to handle the traffic. Here’s how you can prepare for this year, and beyond.

It happens every December. The holiday season brings with it hordes of online shoppers, and — despite having months to prepare — many websites aren’t able to keep up. Homepages are slow to load, images are missing, strange error messages pop up during checkout, and sites fail to respond entirely. The not-so-jolly result: lost sales.

Keynote Systems has measured the performance of top retail websites during the holiday period for the past several years. According to its studies, the average time required to complete a purchase gets longer and longer from Thanksgiving through December. The slowest sites can take 13 seconds or more to complete a transaction (and that’s the average time spent, on a fast connection, waiting for the site’s servers to respond). If your site is slow in August, chances are the increased traffic at the end of the year will overwhelm your servers and make it even slower.

It doesn’t have to be this way. After all, the holidays really shouldn’t surprise anyone. But preparing for holiday traffic (or other predictable surges in the number of visitors to your site, such as the thousands of baseball fans who overwhelmed ticket-sales sites this week in search of World Series tickets) is about 50 percent computer science and 50 percent seat-of-the-pants management.

Mike Gilpin, research manager at Giga Information Group, says that the usual advice for website capacity planning is to look at the biggest peak in traffic your site has experienced so far and then build enough server and network capacity to handle five times that number of visitors. That would probably be a luxury for most IT departments, however (good luck convincing the bean counters that you need to buy five times as many servers as you’ve ever needed in the past). “Obviously that’s very expensive, and not everybody can do that,” Gilpin acknowledges.

A more realistic solution, if you’re concerned about how your site will hold up during the coming holiday season, is to rent extra capacity. Internet service providers can provision you with more T-1 lines, if necessary, or you could pay a Web-hosting service to supply you with extra servers. It’s temporary, but it can get you through the next few months.

A more long-term solution — admittedly, one you aren’t likely to get to before December — is to go through your site, page by page, application by application, and make sure it’s put together as efficiently as possible. Most commercial websites are on their third or fourth versions, so the quick-fix problems have probably been rectified already. Now, says Willy Chiu, vice president of IBM’s high-volume website team in San Jose, the biggest problem is coordinating the various technology and business teams. Websites have become increasingly complex, with multiple tiers of infrastructure: Web servers (to deliver HTML and graphics to customers’ Web browsers), application servers (to assemble webpages from various elements), databases, the data center’s network, and a connection to the Internet via an ISP. (See Business 2.0’s “E-Business Parts List” for a more detailed explanation.) Web performance problems could happen anywhere along this chain, especially if the various parts aren’t coordinated.

To help keep that from happening, IBM’s high-volume website team and Giga have a few recommendations:

1. First, when designing webpages, make sure they’re not so complex and eye-catching that they take forever to load. You need a budget for every page, spelling out the business value of each element (buttons, graphics, scripts, and the like), and you need to be sure that they’re not only necessary but worth the time they take to load in a customer’s browser.

2. A good rule of thumb is that you should try to keep each page under 64 kilobytes, with no more than 20 different items. Total time to download a page should be less than 20 seconds, or less than 8 seconds on a fast connection. (Business 2.0’s homepage, for the record, totals 110KB with a whopping 60 items, but it loads in about 7 seconds on a fast connection — not bad, though there’s room for improvement.) A quick way to spot-check your own pages against these numbers is sketched after this list.

3. Next, test your website in the environment where it’s going to be used. If the majority of your site’s visitors are running Internet Explorer 5 on Windows 98 systems and have dialup connections to the Internet, that’s what you should use to test the site. Too often, sites are evaluated using the latest and greatest hardware, plugged into a company’s lightning-quick Internet connection, which makes them seem faster than they will appear to customers.

4. Use caching or content-delivery networks to improve the speed at which images are downloaded. Such systems, made by the likes of Akamai, distribute copies of frequently used elements, such as graphics, to fast servers that are close to the end users, so they can be loaded faster. You can also boost your site’s performance by reusing images (logos, for example) throughout the site, so that the customers’ Web browsers can access the same files from the browser cache without having to load them every single time.

5. Build your infrastructure with growth in mind. For example, consider using servers with new “blade” architectures, which let you expand storage or processing capacity by plugging in special cards known as blades. You need new, blade-capable hardware for this to work (Hewlett-Packard, Compaq, and Dell have led this market so far), but the advantage is that you can add power to your servers without taking up additional space in the data center.

6. Finally, realize that even if your site is running like a well-tuned dragster, external services, such as credit card processors, fulfillment services, application service providers, and ISPs, can still slow you down. And a frustrated customer doesn’t care that it’s not your fault — those services are transparent, so you’ll take the blame. To prevent this, you need to do due diligence on all your service providers, making sure they can rapidly process each online transaction. If necessary, sign contracts with two or more such providers so you have a backup in case one is slow or goes offline entirely.
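
As promised in tip No. 2, here is a rough way to spot-check a page against those numbers with a short script; it is a simplified Python sketch with a placeholder URL, and it ignores style sheets and other assets a real audit would count:

```python
# A rough, simplified page-weight check along the lines of tip No. 2: fetch a
# page, measure its HTML size, and count the images and scripts it references.
# The URL is a placeholder, and a real audit would also count style sheets,
# frames, and other assets this quick script ignores.
import re
import urllib.request

url = ""  # substitute your own page

with urllib.request.urlopen(url) as response:
    raw =
html = raw.decode("utf-8", errors="replace")

pattern = r"""<(?:img|script)\b[^>]*\bsrc=["']([^"']+)"""
items = re.findall(pattern, html, re.IGNORECASE)

print(f"HTML size: {len(raw) / 1024:.0f} KB")
print(f"Referenced images and scripts: {len(items)}")
```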

Traffic surges — most of them, anyway — are predictable. And with a little careful planning, you can be ready when the next one comes.

Link: The Santa Slam

Link broken? Try the Wayback Machine.


The Death of the $1 Million Software Package

Prices for big corporate systems have come back down from the stratosphere, but that doesn’t mean you need to buy.

Back in the late 1990s, a software salesman could look you in the eye and say with a straight face that his company’s enterprise system would cost you $1 million. Mercifully, those days are over. According to a survey released this week by research firm Yankee Group, the number of seven-figure deals for enterprise resource planning (ERP) and supply-chain management (SCM) software dropped by 62 percent between the fourth quarter of 2000 and the second quarter of 2002. That’s bad news for the vendors — shed no tears for them — but good news if you’re looking to make a major purchase, as you can now get the technology you need at a more reasonable price.

Here’s more good news: The average enterprise IT budget has finally bottomed out and is actually going up — an increase of 3.7 percent is expected in the next 6 to 12 months, according to tech research firm Aberdeen Group. That’s certainly more modest than the 10 to 15 percent growth rates of the 1990s, but it beats the declining IT budgets of the past year or two.

Of course some companies are still paying off the big tech purchases they made during the past few years. The result: a new conservatism among IT buyers. “The market is dictated much more by market fundamentals now,” says Hugh Bishop, a senior vice president at Aberdeen. “A lot fewer organizations are willing to put in place a brand-new application just because the competition is doing it.” Yankee senior analyst Mike Dominy agrees, pointing out that with the passing of the Y2K and dotcom threats, “companies no longer have a burning reason to upgrade or replace existing applications.” Besides, with a sagging market for many companies’ products, increasing productivity is no longer a valid selling point. Why would you want to produce more cars or bars of soap if you can’t sell what you’ve already made? “The CIO’s job now is to understand accounting laws and how to comply with GAAP (generally accepted accounting principles),” says Alan Boehme, chief information officer of Best Software.

Many IT managers are opting instead simply to upgrade what they already have in place. It’s the technology equivalent of putting more water in the soup. Already installed a sales-force automation system? Consider extending that application to handheld computers or mobile phones. The same goes for hardware and network infrastructure. If you have hard disk storage on your servers that’s going unused (a common problem for many companies), look at storage-area network technologies and storage-management systems that can help you better work with the capacity you already have, deferring the day when you’ll have to buy more.

Application integration tools are especially relevant in this environment. “Companies are looking around and saying, ‘OK, I bought all this stuff, how do I make it work together?'” says Yankee’s Dominy. If companies are still buying from ERP and SCM vendors, they’re more likely to purchase smaller applications that have a clear, quick return on investment, such as software for managing a fleet of delivery vehicles, rather than full-blown, end-to-end systems. “Money is going into IT administration and management (including data center integration) and application integration,” agrees George Zachary, a general partner at venture capital firm Mohr Davidow. “Money is going very slowly into business-process-oriented IT (such as CRM).”

Another trend in your favor is that companies are increasingly able to reduce their up-front costs by buying technology on a subscription model. If you deploy a software package widely and use it for a long time, you still might end up spending $1 million — but that elephant is easier to swallow one bite at a time, rather than all at once. That explains the continuing appeal of low-end application service providers, some of which charge just $87 per employee per month. But it’s not just ASPs and outsourcers that put forth such deals; traditional software vendors are now more likely to offer lease options for their enterprise products. “There’s more room to bargain,” says Aberdeen’s Bishop.
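
A quick back-of-the-envelope calculation shows why. The seat count and contract length below are assumptions of mine; only the $87 monthly fee comes from the figures above:

```python
# A rough, hypothetical illustration: subscription pricing spreads the cost
# over time but does not eliminate it. Only the $87 monthly fee comes from
# the column; the seat count and contract length are made-up assumptions.
monthly_fee = 87   # dollars per employee per month (figure cited above)
employees = 500    # hypothetical deployment size
months = 24        # hypothetical contract length

total = monthly_fee * employees * months
print(f"Total over {months} months: ${total:,}")  # $1,044,000
```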

Lower prices, bigger budgets, more flexible financing options — add it all up and your technology staff should have plenty to smile about these days.

Link: The Death of the $1 Million Software Package

Link broken? Try the Wayback Machine.
