Category Archives: Business 2.0

Does Moore’s Law still hold true?

You don’t have to be a software programmer to be familiar with the principle. Since the early 1970s, Moore’s Law — named after Gordon Moore, one of the founders of Intel — has been universally touted within the computing industry. The law has many variants, but the gist of it is this: Computing power will increase exponentially, doubling every 18 to 24 months, for the foreseeable future.

Too bad it isn’t true. According to Ilkka Tuomi, a visiting scholar at the European Commission’s Joint Research Centre in Seville, Spain, not only is Moore’s Law losing significance, but it never fit the data very well in the first place. In an academic paper published last month, Tuomi dissects the many variants of Moore’s Law and shows that, in fact, none of them match up well with actual advances in chip technology. (See Tuomi’s paper for more.) For example, processor power has increased dramatically since 1965, when Moore first proposed his law, but at a slower rate than expected, doubling about every three years instead of every two. That’s equivalent to a ninefold increase in processing power per decade, compared with a 32-fold increase per decade with a two-year doubling period — a big difference.
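For readers who want to check the arithmetic, a quick back-of-the-envelope calculation (my own illustration, not Tuomi’s) shows where those decade multiples come from; the ninefold figure corresponds to a doubling period a bit longer than three years.

```python
# Growth over a decade for a given doubling period:
# factor = 2 ** (10 / doubling_period_in_years)
for period in (2, 3):
    factor = 2 ** (10 / period)
    print(f"{period}-year doubling: about {factor:.0f}x per decade")
# 2-year doubling: about 32x per decade
# 3-year doubling: about 10x per decade (a strictly ninefold increase implies
# a doubling period closer to 3.15 years)
```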

What’s more, it’s hard to translate processor power into increased computing power, because there are so many other factors involved in computer performance. As anyone who has been forced to buy a faster, more powerful computer in order to run the latest version of Windows knows, today’s operating systems are memory and processing hogs. You probably aren’t much more productive on a top-of-the-line 2-gigahertz Pentium 4 desktop running Windows XP today than you were with a 300-megahertz Pentium II running Windows 95 five years ago. The sad fact is that the hardware upgrades of the past decade have been driven more by Microsoft operating system demands than by consumers’ demands for more power. As the old saying goes, Andy Grove giveth, and Bill Gates taketh away.

If Tuomi’s right (and I find his argument persuasive), why should we care? First, Moore’s Law gives the false impression that progress in the semiconductor industry is unlimited and unconstrained by the laws of supply and demand. Unfortunately, that just ain’t so. In reality, the cost of chip factories increases exponentially with each new generation of processors (a trend known as Moore’s Second Law). For example, Intel is spending $2 billion on its latest chip fabrication site in Kildare, Ireland. That’s a very big bet that continued demand for more processing power will eventually sell enough chips to pay for the plant. Take away the demand and you’ve got an economic crisis in the semiconductor industry. More important, Tuomi’s analysis shows that processor power alone is only part of the business of technology — and an increasingly small one at that. Look at any company’s IT infrastructure today and you’ll see that processor power is not a significant issue. There’s more than enough power available (unless you’re one of the workers unlucky enough to be saddled with a four-year-old desktop trying to run Lotus Notes R5 or Windows XP). The biggest corporate technology problems now have to do with storing, managing, organizing, retrieving, and guarding increasingly huge amounts of data.

That’s why the hottest areas for enterprise IT are in segments like storage, knowledge management, customer relationship management, business intelligence, and data mining. These systems are all about handling large amounts of information — and making it useful. Significantly, such systems often require that you spend more time reworking business processes and training employees than you devote to installing the technology itself.

“Sometimes we perhaps invest disproportionately in technology, believing that technology, as such, solves our problems,” Tuomi says. “We often underestimate efforts and investments needed for organizational change and new work practices, for example.”

The challenge now is not finding new and more powerful technologies to serve our needs — it’s organizing our companies and our work lives so that we can use those technologies more effectively. We can no longer trust in the magic wand of Moore’s Law to solve our computing problems for us. Instead, we must learn how to use the tools we already have.

This will be my last Defogger column for Business 2.0. I’ve written more than 80 of these columns since July 2000, and I hope that during that time I’ve helped you to understand and make smarter decisions about technology and its strategic uses in business. Now it’s time for me to move on. If you want to find out what I’m working on in the coming months, please sign up for my personal newsletter at http://dylan.tweney.com. So long, and thanks for all the e-mail!

Link: Does Moore’s Law still hold true?

Link broken? Try the Wayback Machine.

False Alarms on the Firewall

How can you separate a legitimate security threat from routine traffic? A recently upgraded software product can help.


Computer security experts are fond of reminding people just how vulnerable their defenses really are. And for good reason: No security system, no matter how comprehensive or well-designed, can thwart every possible attack directed against it. Hackers and virus programmers are constantly coming up with new tricks, and system administrators can’t anticipate — much less prevent — each and every one of them. Witness September’s Slapper worm, which targeted the popular Apache Web server, or last month’s denial-of-service attacks on the Internet’s central domain name servers. Both came out of nowhere and did substantial damage before system administrators were able to put countermeasures in place.

To help keep their networks safe, many companies have started using intrusion detection systems, or IDSs. Cisco (CSCO) and Internet Security Systems both sell proprietary IDSs, and there’s a popular open-source version known as Snort. The software functions a bit like the alarms and security cameras in a bank: It doesn’t actually stop the crime, but it does warn you when an attack is in progress and provides a record of what happened, in order to help you catch the hacker or prevent similar attacks in the future.

That’s the theory, at any rate. The unfortunate reality is that IDSs typically generate a lot of “false positives” — like car alarms on city streets, they’re going off all the time even when there’s no real threat, which makes them more of a nuisance than a genuine deterrent. “There’s too much traffic out there that’s normal but looks suspicious to an IDS,” says Pete Lindstrom, research director for Spire Security. Your IT staff can tune the IDS to your network environment, reducing the number of false alarms, but that takes effort and time — months, in many cases.

ForeScout Technologies offers one of several responses to the problem of false positives. It sells security software called ActiveScout that works like an IDS, watching the traffic going in and out of your network for any suspicious activity. But when it detects something suspicious — for example, someone scanning your servers for open ports or requesting a username and password — the software goes active, sending out a bogus, “tagged” response. To the person doing the scanning, this looks like an ordinary reply, but if he tries to act on that information (say, by using the supplied username and password), he’ll give away his true status as an interloper. ActiveScout will immediately block that person’s access to your network, and only then will it notify your network managers.

The strategy works better than passive intrusion detection because most network attacks are preceded by some kind of reconnaissance. If you can correctly identify the reconnaissance, you can more effectively avert the subsequent attack.
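To make the tag-and-trap idea concrete, here is a toy sketch in Python (purely illustrative pseudologic of my own, not ForeScout’s implementation, which works at the network level rather than in application code):

```python
import secrets

issued_tags = {}    # bogus credential -> the scanning IP that received it
blocked_ips = set()

def on_port_scan(source_ip):
    """Answer a reconnaissance probe with a fake, tagged credential."""
    tag = "user_" + secrets.token_hex(4)
    issued_tags[tag] = source_ip
    return {"username": tag, "password": secrets.token_hex(8)}

def on_login_attempt(source_ip, username):
    """Anyone presenting a tagged credential learned it by scanning us."""
    if username in issued_tags:
        blocked_ips.add(source_ip)
        return "blocked"
    return "passed to normal authentication"

# A scanner probes the network, then tries to use what it learned.
bait = on_port_scan("203.0.113.7")
print(on_login_attempt("203.0.113.7", bait["username"]))   # -> "blocked"
```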

ForeScout, which released a new version of ActiveScout this week, has about 20 corporate customers so far. One of them is Risk Management Systems, which provides risk analysis services to insurance companies and other financial institutions and has been using ActiveScout for about a year. According to Barry Choisser, the firm’s network manager, no attacks have made it past the system’s defenses during that time, despite frequent, often hourly, attempts. Nor has ActiveScout mistakenly blocked any legitimate traffic. It hasn’t required much maintenance — a boon for Choisser, who oversees just two people responsible for defending the company’s California headquarters as well as offices in North America, Europe, and Asia — and hasn’t needed the frequent tweaking that most IDSs (and most security tools of any type, for that matter) require to recognize and respond to the newest attacks.

ActiveScout isn’t alone in this battle. Other IDS vendors, such as IntruVert Networks, are using sophisticated analysis techniques to identify and stop network attacks more quickly and effectively. None are perfect. That’s why you still need firewalls, virus scanners, and other security measures. But these developments in IDS technologies should be welcome news for companies defending their virtual borders against an increasingly sophisticated crowd of viruses, worms, and hackers.

Link: False Alarms on the Firewall

Link broken? Try the Wayback Machine.

Still Waiting for the Web Services Miracle

They haven’t changed the world yet, but there are ways to make them work.


If you flip through the technology magazines of a year ago, you’ll likely find a lot of stories touting Web services as the next big new technology you need to know about. The promise: programming standards that would allow different applications to talk to each other over the Internet. Just as browsers connect with websites to download pages, applications could connect with one another and exchange information.

Assuming it all came together as planned, companies would be able to “rent” applications only when they needed them. Looking to display some information visually? Don’t buy a whole spreadsheet application — just connect with an online graphing component via Web services, graph your data, and then disconnect. For programmers, the dream was even more exciting: With the ability to assemble standard components from a variety of sources, all available online, building business applications would become as easy as clicking Lego pieces together.

Time for a reality check. According to a recent report by Rikki Kirzner, research director at IDC, it will be at least 10 years before companies can actually build applications out of online components in this manner. “All of that’s not doable today, or next year, or the year after,” Kirzner says. “There’s a big pitfall for those who believe that this kind of capability will exist next year.”

That’s not to say the technology is 100 percent hype. A few basic programming standards have already been established, like the simple object access protocol (SOAP), which defines the way applications can request and deliver data using extensible markup language. SOAP has already achieved wide acceptance in the past year, with SOAP-compatible software-development tools available from Borland (BORL), IBM (IBM), Microsoft (MSFT), Sun (SUNW), and many others. When I covered the topic one year ago, SOAP and similar standards were in their infancy, and IT managers were viewing Web services with interest, but also with justifiable skepticism. (See “A Common Language for the Next-Generation Internet.”)
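For a flavor of what these standards look like on the wire, here is a minimal SOAP 1.1 request assembled by hand in Python; the endpoint, namespace, and GetQuote operation are invented for illustration.

```python
import urllib.request

# A hand-built SOAP 1.1 envelope asking a hypothetical service for a stock quote.
envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://example.com/stocks">
      <Symbol>INTC</Symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    "http://example.com/quoteservice",     # hypothetical endpoint
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/stocks/GetQuote"},
)
# response = urllib.request.urlopen(request)   # the reply comes back as XML, too
```

The point is simply that both sides agree on the envelope format; everything else (security, transaction integrity, real-time exchange) still has to be worked out separately, as the next paragraph notes.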

But integrating your Java-based application with someone else’s 20-year-old Cobol program still takes time, coordination, and a team of programmers. That’s because so many different elements all have to match up. (Think about how hard it is to get software integrated within a single company, let alone integrate it with the software of other companies around the country.) Current Web services standards help, but they don’t solve the problem outright — and what’s more, they lack many of the features required by enterprise applications, such as ironclad security, the ability to guarantee the integrity of transactions, and a seamless way to exchange information in real time.

If you forget the grandiose promises, there are some things Web services are good for right now, and most of them take place not over the Internet but within a company’s intranet. That way it’s all inside the firewall, and your IT staff controls exactly what’s being connected and what’s getting exchanged. For example, Web services are helping companies tack new capabilities onto old, so-called legacy software. “Instead of replacing legacy applications, you’re now extending the life of those applications through Web services,” says Alan Boehme, executive vice president and chief information officer at Best Software.

Gartner, another research firm, identifies corporate portals — those Web-based “dashboards” that combine information from a variety of company information systems — as one area where Web services are finding traction. To employees, the portal looks simple enough, but when they make requests or enter information (to, say, change their 401(k) preferences or view the previous quarter’s sales reports), different applications kick in to execute those commands. Behind the scenes, Web services are increasingly being used to send such commands to the various applications and to consolidate the results onscreen.

Baby steps, to be sure, but right now it’s better than nothing. And as corporate offices put the technology to work in-house, software companies are gradually upgrading their programs to speed the process along. According to Gartner, makers of enterprise software are rapidly adding Web services capabilities to their existing products, which will ultimately simplify the process of linking those products with the rest of your IT infrastructure. This charge is being led by Microsoft, with its .Net initiative; IBM, with its WebSphere product line; and BEA, with its WebLogic products.

Sun Microsystems has added Web services support to Java and to its Sun One software line, but until recently it has not played a strong role in defining and promoting Web services. However, Sun last week joined the Web Services Interoperability organization, a key consortium responsible for defining Web services standards, which may indicate that the company is taking Web services more seriously than ever.

These makers of enterprise software clearly believe in the future of Web services — and their customers are starting to pay attention. But for now, Web services are more of an evolutionary change than a true revolution in computing. It will be a long time before you can build your own enterprise applications out of components that you pick up at the software mall.

Link: Still Waiting for the Web Services Miracle

Link broken? Try the Wayback Machine.

The Santa Slam

The holiday rush is coming, and as usual, many sites won’t be able to handle the traffic. Here’s how you can prepare for this year, and beyond.


It happens every December. The holiday season brings with it hordes of online shoppers, and — despite having months to prepare — many websites aren’t able to keep up. Homepages are slow to load, images are missing, strange error messages pop up during checkout, and sites fail to respond entirely. The not-so-jolly result: lost sales.

Keynote Systems has measured the performance of top retail websites during the holiday period for the past several years. According to its studies, the average time required to complete a purchase gets longer and longer from Thanksgiving through December. The slowest sites can take 13 seconds or more to complete a transaction (that figure is the average time spent, on a fast connection, waiting for the site’s servers to respond). If your site is slow in August, chances are the increased traffic at the end of the year will overwhelm your servers and make it even slower.

It doesn’t have to be this way. After all, the holidays really shouldn’t surprise anyone. But preparing for holiday traffic (or other predictable surges in the number of visitors to your site, such as the thousands of baseball fans who overwhelmed MLB.com and Tickets.com this week in search of World Series tickets) is about 50 percent computer science and 50 percent seat-of-the-pants management.

Mike Gilpin, research manager at Giga Information Group, says that the usual advice for website capacity planning is to look at the biggest peak in traffic your site has experienced so far and then build enough server and network capacity to handle five times that number of visitors. That would probably be a luxury for most IT departments, however (good luck convincing the bean counters that you need to buy five times as many servers as you’ve ever needed in the past). “Obviously that’s very expensive, and not everybody can do that,” Gilpin acknowledges.

A more realistic solution, if you’re concerned about how your site will hold up during the coming holiday season, is to rent extra capacity. Internet service providers can provision you with more T-1 lines, if necessary, or you could pay a Web-hosting service to supply you with extra servers. It’s temporary, but it can get you through the next few months.

A longer-term solution — admittedly, one you aren’t likely to get to before December — is to go through your site, page by page, application by application, and make sure it’s put together as efficiently as possible. Most commercial websites are on their third or fourth versions, so the quick-fix problems have probably been rectified already. Now, says Willy Chiu, vice president of IBM’s high-volume website team in San Jose, the biggest problem is coordinating the various technology and business teams. Websites have become increasingly complex, with multiple tiers of infrastructure: Web servers (to deliver HTML and graphics to customers’ Web browsers), application servers (to assemble webpages from various elements), databases, the data center’s network, and a connection to the Internet via an ISP. (See Business 2.0′s “E-Business Parts List” for a more detailed explanation.) Web performance problems can happen anywhere along this chain, especially if the various parts aren’t coordinated.

To help keep that from happening, IBM’s high-volume website team and Giga have a few recommendations:

1. First, when designing webpages, make sure they’re not so complex and eye-catching that they take forever to load. You need a budget for every page, spelling out the business value of each element (buttons, graphics, scripts, and the like), and you need to be sure that they’re not only necessary but worth the time they take to load in a customer’s browser.

2. A good rule of thumb is to keep each page under 64 kilobytes, with no more than 20 different items. Total time to download a page should be less than 20 seconds, or less than 8 seconds on a fast connection; see the quick arithmetic after this list. (Business 2.0′s homepage, for the record, totals 110KB with a whopping 60 items, but it loads in about 7 seconds on a fast connection — not bad, though there’s room for improvement.)

3. Next, test your website in the environment where it’s going to be used. If the majority of your site’s visitors are running Internet Explorer 5 on Windows 98 systems and have dialup connections to the Internet, that’s what you should use to test the site. Too often, sites are evaluated using the latest and greatest hardware, plugged into a company’s lightning-quick Internet connection, which makes them seem faster than they will appear to customers.

4. Use caching or content-delivery networks to improve the speed at which images are downloaded. Such systems, made by the likes of Akamai, distribute copies of frequently used elements, such as graphics, to fast servers that are close to the end users, so they can be loaded faster. You can also boost your site’s performance by reusing images (logos, for example) throughout the site, so that the customers’ Web browsers can access the same files from the browser cache without having to load them every single time.

5. Build your infrastructure with growth in mind. For example, consider using servers with new “blade” architectures, which let you expand storage or processing capacity by plugging in special cards known as blades. You need new, blade-capable hardware for this to work (Hewlett-Packard, Compaq, and Dell have led this market so far), but the advantage is that you can add power to your servers without taking up additional space in the data center.

6. Finally, realize that even if your site is running like a well-tuned dragster, external services, such as credit card processors, fulfillment services, application service providers, and ISPs, can still slow you down. And a frustrated customer doesn’t care that it’s not your fault — those services are invisible to customers, so you’ll take the blame. To prevent this, you need to do due diligence on all your service providers, making sure they can rapidly process each online transaction. If necessary, sign contracts with two or more such providers so you have a backup in case one is slow or goes offline entirely.
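Here is the quick arithmetic promised in item 2, a rough sketch that ignores latency, per-request overhead, and compression (which is why the observed 7-second load of the 60-item homepage is far slower than its raw transfer time):

```python
def seconds_to_load(page_kilobytes, connection_kbps):
    """Raw transfer time only: page size in kilobits divided by line speed."""
    return page_kilobytes * 8 / connection_kbps

print(f"64KB page on 56k dialup:   {seconds_to_load(64, 56):.0f} seconds")
print(f"110KB page on 56k dialup:  {seconds_to_load(110, 56):.0f} seconds")
print(f"110KB page on 768kbps DSL: {seconds_to_load(110, 768):.1f} seconds")
# 64KB page on 56k dialup:   9 seconds
# 110KB page on 56k dialup:  16 seconds
# 110KB page on 768kbps DSL: 1.1 seconds (the rest of the observed load time
# goes to fetching each of the page's individual items)
```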

Traffic surges — most of them, anyway — are predictable. And with a little careful planning, you can be ready when the next one comes.

Link: The Santa Slam

Link broken? Try the Wayback Machine.

The Death of the $1 Million Software Package

Prices for big corporate systems have come back down from the stratosphere, but that doesn’t mean you need to buy.


Back in the late 1990s, a software salesman could look you in the eye and say with a straight face that his company’s enterprise system would cost you $1 million. Mercifully, those days are over. According to a survey released this week by research firm Yankee Group, the number of seven-figure deals for enterprise resource planning (ERP) and supply-chain management (SCM) software dropped by 62 percent between the fourth quarter of 2000 and the second quarter of 2002. That’s bad news for the vendors — shed no tears for them — but good news if you’re looking to make a major purchase, as you can now get the technology you need at a more reasonable price.

Here’s more good news: The average enterprise IT budget has finally bottomed out and is actually going up — an increase of 3.7 percent is expected in the next 6 to 12 months, according to tech research firm Aberdeen Group. That’s certainly more modest than the 10 to 15 percent growth rates of the 1990s, but it beats the declining IT budgets of the past year or two.

Of course some companies are still paying off the big tech purchases they made during the past few years. The result: a new conservatism among IT buyers. “The market is dictated much more by market fundamentals now,” says Hugh Bishop, a senior vice president at Aberdeen. “A lot fewer organizations are willing to put in place a brand-new application just because the competition is doing it.” Yankee senior analyst Mike Dominy agrees, pointing out that with the passing of the Y2K and dotcom threats, “companies no longer have a burning reason to upgrade or replace existing applications.” Besides, with a sagging market for many companies’ products, increasing productivity is no longer a valid selling point. Why would you want to produce more cars or bars of soap if you can’t sell what you’ve already made? “The CIO’s job now is to understand accounting laws and how to comply with GAAP (generally accepted accounting principles),” says Alan Boehme, chief information officer of Best Software.

Many IT managers are opting instead simply to upgrade what they already have in place. It’s the technology equivalent of putting more water in the soup. Already installed a sales-force application? Consider extending it to handheld computers or mobile phones. The same goes for hardware and network infrastructure. If you have hard disk storage on your servers that’s going unused (a common problem for many companies), look at storage-area network technologies and storage-management systems that can help you make better use of the capacity you already have, deferring the day when you’ll have to buy more.

Application integration tools are especially relevant in this environment. “Companies are looking around and saying, ‘OK, I bought all this stuff, how do I make it work together?’” says Yankee’s Dominy. If companies are still buying from ERP and SCM vendors, they’re more likely to purchase smaller applications that have a clear, quick return on investment, such as software for managing a fleet of delivery vehicles, rather than full-blown, end-to-end systems. “Money is going into IT administration and management (including data center integration) and application integration,” agrees George Zachary, a general partner at venture capital firm Mohr Davidow. “Money is going very slowly into business-process-oriented IT (such as CRM).”

Another trend in your favor is that companies are increasingly able to reduce their up-front costs by buying technology on a subscription model. If you deploy a software package widely and use it for a long time, you still might end up spending $1 million — but that elephant is easier to swallow one bite at a time, rather than all at once. That explains the continuing appeal of low-end application service providers like Salesforce.com, which charges just $87 per employee per month. But it’s not just ASPs and outsourcers that put forth such deals; traditional software vendors are now more likely to offer lease options for their enterprise products. “There’s more room to bargain,” says Aberdeen’s Bishop.

Lower prices, bigger budgets, more flexible financing options — add it all up and your technology staff should have plenty to smile about these days.

Link: The Death of the $1 Million Software Package

Link broken? Try the Wayback Machine.

Are You Overpaying for Content Management?

Companies are spending hundreds of thousands of dollars on software to manage their websites and other documents — and getting dubious returns. There’s got to be a better way.


The numbers aren’t pretty. According to a Jupiter survey of chief information officers at companies with more than $50 million in revenue, 53 percent will have deployed new content-management systems by the end of this year. Given the expense involved — a high-end CMS like Vignette can cost upwards of $100,000 for the software alone, plus another multiple of that in installation costs — you’d think these companies would be choosing wisely and getting exactly what they need. Unfortunately, that’s not always the case.

Content-management systems help companies produce and maintain all their textual information: marketing copy, help files, customer service FAQs, and the like. The information can appear on websites, but it doesn’t have to — if your company uses manuals, books, press releases, or brochures, you can probably benefit from the kind of functions a CMS will offer (an obvious example would be, say, the ability to alter the copyright notice at the bottom of every page of a website or book with just a few keystrokes).
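To make the copyright example concrete, here is a trivially simplified sketch of the single-sourcing principle (my own illustration, not any particular vendor’s product): shared elements live in one place, and every rendered page picks up the change.

```python
# One shared record of site-wide boilerplate...
SITE = {"copyright": "Copyright 2002 Example Corp. All rights reserved."}

PAGE_TEMPLATE = "<html><body>{body}<footer>{copyright}</footer></body></html>"

def render(body):
    """Every page is assembled from the template plus the shared record."""
    return PAGE_TEMPLATE.format(body=body, **SITE)

# ...so a single edit updates the notice everywhere, with no page-by-page rework.
SITE["copyright"] = "Copyright 2003 Example Corp. All rights reserved."
print(render("<h1>Product FAQ</h1>"))
```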

Vendors such as Vignette, Documentum, and Interwoven came to the forefront during the Internet boom because websites often required a large number of people to manage more information than ever, frequently posting new content to a big site daily or even hourly. With a CMS, the people who write copy can update the site easily using simple, Web-based forms, without having to muck around in HTML, while others take care of the site’s appearance, organization, and underlying code.

That sounds great on paper (no pun intended), but in the real world, all the expense and effort involved in deploying such a system often results in disappointment, says Jupiter Research senior analyst Matthew Berk. Companies pick the wrong software for their needs or underestimate how much time and money the deployment will take. “Nothing works out of the box,” Berk says, “period.” He adds that CMS deployment costs are usually two to six times the software’s purchase price.

As a result, he predicts that many companies will abandon high-end CMS vendors and start building cheaper, homegrown versions based on open-source software, such as Cocoon, Mason, and Zope. However, that’s a less-than-ideal approach as well. “It takes an organization a long time to duplicate the bells and whistles that are available in an off-the-shelf product,” says Ann Rockley, president of Toronto-based content management consultancy the Rockley Group.

What’s the solution? Actually, there are a few. First off — and this may seem like common sense — you should be clear at the beginning about what you need, and approach the purchase from a business perspective. “Organizations have a tendency to buy the tech first, and then figure out what it is they’re going to do with it,” Rockley says. Instead, she advises, determine exactly which employees are going to produce content, what the existing work flow is, what kinds of documents they’re creating, and where the content will be used. Bill Rogers, CEO of CMS vendor Ektron, agrees. “You need to sit down and figure out how you manage content today, who should be in what work groups, and how you manage the approval process,” he says.

Second, include all types of content in your analysis, not just text on your website. The more you can reuse common elements (product descriptions, for instance) — instead of having to rewrite them each time — the more efficient you can be with the company’s time and money. The savings get especially dramatic when you need to have copy translated into other languages, Rockley says, because a CMS can save you from having identical text translated more than once.

Third, if you’re building a homegrown CMS, tread with care. A better approach may be to use low-end commercial CMS software, which can give you a basic framework for as little as a few thousand dollars. The high-end systems will handle large volumes of content better and often have more sophisticated options, but these features may be overkill for many companies. Berk estimates that there are more than 200 CMS vendors in the market now, including Ektron, Atomz, Percussion, and Red Dot.

Fourth, make sure that business managers, not IT, are in control of the CMS project. It’s the business side that understands how content is produced and what kinds of processes really work — aspects that are critical to a successful deployment. “If you have an IT department that’s simply automating bad business processes, you can see why it gets very costly,” says Scott Abel, a content-management strategist for Nims Associates, an IT consulting firm in Indiana. Business managers can also stick up for the needs of the nontechnical staff who will actually be using the system to create content. “Usability for the nontechnical users is critical,” Jupiter’s Berk says. In other words, you need to keep it simple for the sake of the editorial staff.

Finally, do a dry run before you build out an entire content-management system. Many vendors will provide a low- or no-cost evaluation version of their software for pilot projects, Rockley says. That can give you invaluable feedback on whether the software (and the business processes you’ve set up) is going to work properly. After all, if you’re going to drop a few hundred thousand dollars on a critical piece of your business infrastructure, you should be 100 percent certain it’s what you want.

Link: Are You Overpaying for Content Management?

Link broken? Try the Wayback Machine.

Your Company’s Biggest Data Risk? It Might Just Be the Employees.

Most companies are diligent about backing up their servers and mainframes. But how much vital information are you leaving exposed on laptops and desktop PCs?


Computer viruses. Laptop thieves. Fires. Hurricanes. All have the potential to erase, damage, or destroy valuable corporate information. Too bad most companies don’t have a decent backup plan in place to recover this information when it gets lost.

Sure, corporate IT departments are religious about backing up data on servers and mainframes. If one of these mission-critical machines goes down, backups will ensure that your business stays up and running, with minimal data loss. But there’s a big blind spot in most IT departments’ backup strategies: the hard drive sitting on (or under) your desk.

Stop and think about how much of your company’s business information is tucked away in your desktop PC or laptop right now. Maybe it’s the draft of a proposal, a critical customer’s phone number, an e-mail message, or a spreadsheet. When was the last time you backed up that data? How about the guy in the cube next door, or the sales team, with their laptops?

If you want a worst-case scenario, just talk to David Johnson, the director of technology for Grant Thornton, an international accounting firm. Grant Thornton has about 3,000 employees in 51 different offices, and more than 70 percent of them routinely work on laptops at clients’ offices. A couple of years ago, some employees were wrapping up a big job. They shipped all of their computers and equipment back to the corporate office, but when they got back they discovered that one of the boxes didn’t make it. The shipping company had lost the box, and of course, that was the one containing the computer that had all of the client’s data on it. “Think of all the embarrassment, of having to explain to the client that we had to redo all that work,” says Johnson. (The box turned up a couple of months later — after Grant Thornton had already redone the work.)

It’s a widespread problem. According to a recent IDC report, more than 300 million business PCs have a combined 109,000 terabytes of data that is not backed up regularly — about half of all the data on corporate PCs and laptops. That’s 10 times the quantity of information in the entire Library of Congress.

Part of the problem is the backup policies at many companies, which essentially say to employees, “Here’s some space on the network for you to use. Please back up your important files to the network whenever you can.” The people most likely to follow such policies are those with the most time on their hands — in other words, your least productive employees. The others, the ones who are really accomplishing something and who are likely to have the most valuable data on their PCs, often ignore such policies or follow them sporadically at best.

Grant Thornton now uses a backup system from Connected Corp. that automatically backs up laptops and PCs whenever they’re connected to the network. The software also enables employees to recover data by themselves, quickly and easily, whenever they discover something’s been lost. That’s in line with IDC senior research analyst Fred Broussard’s recommendations. “Backup strategies need to take into account disaster recovery, and the fact that IT and end users can forget that they need to back up,” Broussard says.

Within 30 days of deploying the Connected system, Johnson says, almost 80 percent of employees’ hard drives were backed up; it now covers virtually everyone (the backups contain about 8 terabytes of server data). The software’s recovery feature is used as often as 300 times per month — and it was particularly important last year, when the 9/11 attacks took the company’s lower Manhattan office offline. Although no data from that office was lost, employees had to evacuate the office, leaving everything behind. Fortunately, a backup data center was online within six minutes, Johnson says, and the company managed its data remotely for several months while waiting for the Manhattan office to reopen.

Connected’s software costs $100 per person if you run it yourself, or $150 per person per year if Connected runs it for you as a managed service. Other vendors of desktop PC backup software and services include Storactive, NovaStor, Dantz, and enterprise backup heavyweight Veritas. Whichever solution your company chooses, be sure that it backs up data automatically, without requiring employees’ attention, and that it makes data recovery simple for employees to do themselves. Don’t leave their data twisting in the wind.

Link: Your Company’s Biggest Data Risk? It Might Just Be the Employees.

Link broken? Try the Wayback Machine.

Blogging for Dollars

Businesses are starting to use weblogs — those impromptu lists-cum-journals — as powerful tools for knowledge management and communications.


Businesspeople might be forgiven for rolling their eyes when the word “weblog” is mentioned. After all, most media coverage to date has focused on weblogs (a.k.a. “blogs”) as public diaries — idiosyncratic, personal, and not especially relevant to anyone outside the blogger’s circle of friends. But what the coverage has missed so far is that blogs are also powerful knowledge management tools. Two new business blogging products from Trellix and Traction Software show how that might work.

Blogging is attractive as a vehicle for personal expression because it’s an easy way to capture, comment on, and keep abreast of interesting tidbits of information. The same characteristic makes blogging well-suited to businesses that want to track information about products and markets, or distribute information to employees and customers. You see something interesting on the Web, and within seconds you can put a link to it on your weblog, add some comments, and be on to something else. Naturally, other bloggers are doing the same thing. Over time, your own blog and the other blogs you spend time reading develop into a big, interconnected web of information. It’s like a quick-and-dirty, easy-to-use knowledge management system.

Business blogs are more likely to be focused on projects or teams than on their individual creators. For instance, a marketing team at Verizon uses Traction software to track market conditions and competitive intelligence. Members of a product development team might use a private weblog to which they contribute notes and ideas regarding the development process. Customer service reps could contribute problem fixes and customer notes to a collaborative weblog and refer to it later — kind of like a continuously evolving user manual. For personal blogs, as Traction CEO Tim Simonson puts it, “it’s all about me — it’s a personal publishing orientation. In business, on the other hand, the orientation is about the subject, or the product, or the business issue.”

That doesn’t mean business blogs should be bland and corporate in tone. In fact, especially for customer-oriented weblogs, it’s better if they aren’t. Trellix co-founder Dan Bricklin suggests that personality might be particularly relevant to small businesses. “For a lot of small businesses, the way they survive is because of personal service,” says Bricklin. “There’s an actual person behind the counter who you can get to know.” A public weblog could let businesses provide that kind of service online. For instance, a fish and tackle shop owner might run a weblog where he passes along fishing topics, tips, and news — the same way he would when schmoozing with customers over the counter. Trellix added a blogging feature to its Web publishing platform this month, and Bricklin’s own weblog details many other ways small businesses can use blogs.

Weblogs’ ease of publishing has a disadvantage: Because it’s so easy to post information, blogs grow quickly and become unwieldy, making it harder and harder to track down relevant information. That’s because most blogs are organized solely in chronological order, with the most recent posts at the top and older content stacked in an archive, like a pile of old newspapers.

One way to manage the problem is through a search engine. As Network World recently reported, Phillip Windley, CIO of the state of Utah, has offered a weblog tool called Radio to any state employee who wants it. The addition of a Google Search Appliance makes the content of these blogs easily accessible to anyone. Employees haven’t exactly been jumping on the offer, but over time, Windley says, he hopes these blogs will develop into a “state knowledge base.”

For true knowledge management, however, a search engine probably won’t be enough. Traction’s weblog product for businesses, called TeamPage, tries to address that problem by adding sophisticated categorization and information-retrieval tools. These let users pull out all the information related to a particular project and view it on a single page — even if that information spans several years’ worth of posts from different users. Traction also adds access control, so that only authorized users can view information designated as sensitive — an essential element for corporate users. On the downside, TeamPage is more complicated to use than plain-vanilla weblogs.

For now, few businesses — apart from self-promoting independent consultants and weblog software vendors — operate weblogs as a matter of course; Utah and Verizon are the exception rather than the rule. But as blogging becomes more mainstream, that will change. Like PCs, instant messaging, and handheld computers, your company’s first blogs may well sneak in under the radar of IT, set up by enterprising employees who just want to get something done. This revolution may not be televised, but it will be blogged.

Link: Blogging for Dollars

Link broken? Try the Wayback Machine.

Carte Blanche for Hackers

Some recently proposed legislation could open up computer networks to vigilante-style justice.


There’s never been a better time to be a hacker. Twice last week, Bush administration officials reached out to the hacker community, asking attendees at two different Las Vegas conferences to be responsible citizens and report any vulnerabilities they discover to both software makers and the government. Bush’s computer security adviser, Richard Clarke, even suggested possible legal protections for hackers who act in good faith rather than trying to exploit vulnerable networks out of sheer malice. (Considering that the Department of Justice recently reported 400 laptops missing and unaccounted for, he might also have asked hackers to keep an eye out for laptops with FBI logos.)

But government work isn’t the only option open to hackers these days. If a bill recently introduced by U.S. Rep. Howard Berman passes, hackers will be able to find plenty of gainful and glamorous employment in the movie and music industries. That’s because the California Democrat’s bill would give copyright holders legal immunity for hacking peer-to-peer file-trading networks that infringe on their copyrights. It’s akin to saying that since entertainment companies have been wronged, they should now be free to put together a posse and exact a little technological revenge.

Understandably, the bill is a little unsettling to people who run corporate IT departments, even those who operate legitimate, secure corporate networks. Why? Berman’s bill is so broad that it could lead to a rash of hacking activity. If passed, it would allow copyright holders to go after perceived infringements through almost any means necessary, as long as they don’t actually delete or modify files on someone’s PC. “Spoofing” P2P networks with bogus files? No problem. Posting deliberately malicious code on a P2P network? Green light. Denial-of-service attacks? Go ahead — even if it brings down the whole network, causing problems for legal and illegal users alike.

Given the broad scope of copyright law, everyone holds some kind of copyright. Even your personal e-mails are copyrighted implicitly, as soon as you put fingers to keyboard. So all of the above attacks would be open not only to record and movie companies but also to lunatic fringe religious groups, disgruntled former employees, and terrorists.

And even if you think your corporate network is free of P2P file-trading, you may be mistaken. Some file-sharing tools, like LimeWire and Kazaa, can work through firewalls. Your employees may be trading files without your even knowing it. So that guy in your payroll department who’s been trafficking in Britney Spears videos on the Q.T. could trigger a denial-of-service attack, causing a crippling amount of traffic on your company’s Internet connection and bringing e-business as usual to a grinding halt.

It’s one thing to encourage hackers to be responsible about computer vulnerabilities they’ve discovered. But giving them carte blanche to hack into P2P networks, under the guise of protecting intellectual property, seems a bit over the top. The people who own copyrights on digital entertainment are understandably worried about online piracy, but that doesn’t justify turning the Internet into a Wild West of vigilante vengeance.

What can you do? First, network management software (such as HP OpenView, IBM’s Tivoli, or Computer Associates Unicenter) can tell you whether employees are using your network for file-trading. Let employees know what your company’s policies are for acceptable use of the network. If file-trading presents a problem, or you’re worried about its potential security impact, shut it down. If your current network security system doesn’t let you throttle back or cut off P2P network traffic altogether, it’s time to upgrade.

Also, an intrusion-detection system such as Cisco IDS, Internet Security Systems RealSecure, or Enterasys DragonIDS — all of which alert network administrators to security threats — can help protect against denial-of-service attacks and also help you diagnose problems and find fixes after such an attack happens. If you’ve been attacked or are concerned about DOS attacks, you need intrusion detection on top of your firewall or other existing network security device.

Finally, Berman’s bill isn’t the law — yet. Let your congressional representatives know what you think about it, while there’s still time to make a difference. The Electronic Frontier Foundation has more information about the bill and a form letter you can send to your representative.

Link: Carte Blanche for Hackers

Link broken? Try the Wayback Machine.

Importance of Knowing Who’s Who

Mention directory servers to the average person and you’ll get either a blank stare or a knowing look and a yawn. That’s because these servers, which manage lists of users on a computer network, play a decidedly prosaic role within corporate information systems. But as we enter the increasingly interconnected world of Web services, directory servers — newly dubbed “identity-management solutions” — will be critical.

First, let’s get the terminology out of the way. “Directory servers” are essentially glorified company phone books that list employees, departments, business partners with authorized access to your network, or customers with accounts on your corporate website. A directory server is like the person standing in front of a trendy nightclub with a clipboard, waving the VIPs in and telling everyone else to stay behind the velvet rope. (The firewall — and related security technology — is like the bouncer who makes sure no one crashes the party after having been turned away.)
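In practice, the clipboard check is an LDAP query. The sketch below uses the open-source ldap3 Python library purely for illustration (it postdates the products discussed here, and the host, credentials, and directory layout are made up); the commercial directory servers named in this column speak the same protocol.

```python
from ldap3 import Server, Connection, ALL

# Connect to a hypothetical corporate directory with a read-only service account.
server = Server("ldap.example.com", get_info=ALL)
conn = Connection(server, "cn=directory-reader,dc=example,dc=com", "secret",
                  auto_bind=True)

# The "clipboard" check: is jdoe on the list, and what do we know about her?
conn.search(search_base="ou=people,dc=example,dc=com",
            search_filter="(uid=jdoe)",
            attributes=["cn", "mail", "departmentNumber"])

for entry in conn.entries:
    print(entry.cn, entry.mail, entry.departmentNumber)
```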

The directory market is led by Novell’s eDirectory and Sun Microsystems’s Sun One Directory Server, with competition from Microsoft, IBM, and Oracle. Standardization of how directories store and retrieve information means that the market has become commoditized, and prices are extremely low — often, you’ll get a directory server for almost nothing when you buy other servers (say, Web and application servers) from the same vendor. Despite the standards, there’s no question that integration and maintenance are simpler for your IT staff when everything is on the same platform, so directories are a stalking horse for the rest of a vendor’s line of servers. Scott McNealy and Bill Gates, chairmen of Sun and Microsoft respectively, “have the view that he who owns the directory owns the account,” says Scott Silk, VP for marketing at ePresence, a provider of directory and identity-management consulting services.

But tracking who’s who is getting more complicated. As Web services help companies become more and more interconnected, it can be a chore just to keep tabs on all the parties to a transaction. Therefore, directories are gradually evolving into more capable identity-management systems, which let companies track a whole range of customer, employee, and business partner identities — and make changes to those identities quickly when, for instance, an employee is hired or fired. Naturally, identity management is more expensive than mere directory services: ePresence typically charges between $250,000 and $500,000 to design, build, and manage a full-blown identity-management system.

What’s more significant, perhaps, is that identity-management systems are gradually gaining the ability to talk with one another through the infrastructure provided by Microsoft’s Passport and the competing Liberty Alliance project — a coalition of corporations, led by Sun, that released its first identity-management specification last week at the Burton Group’s Catalyst conference in San Francisco. Passport and Liberty simplify Web commerce by enabling companies to exchange information about their customers’ identities. For example, when you buy a plane ticket on an airline’s website, the airline may want to refer you to a particular hotel chain for a preferred rate. Instead of making you log in a second time once you’re at the hotel’s site, the airline’s directory servers could simply tell the hotel’s servers who you are and that you’ve already logged on to the airline site (or, in the parlance of identity management, that you have been “authenticated”). The hotel could then pull up its own records on your account.
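Here is the shape of that handoff, radically simplified (real federated sign-on exchanges signed SAML assertions between servers; the shared secret and names below are hypothetical stand-ins for that trust relationship):

```python
import hmac, hashlib, time

SHARED_SECRET = b"trust-agreed-between-airline-and-hotel"   # hypothetical

def airline_issues_assertion(user_id):
    """The airline vouches that this user has already been authenticated."""
    payload = f"{user_id}|{int(time.time())}"
    signature = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload, signature

def hotel_accepts_assertion(payload, signature):
    """The hotel verifies the airline's vouching instead of asking you to log in again."""
    expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

payload, signature = airline_issues_assertion("traveler42")
print(hotel_accepts_assertion(payload, signature))   # True: no second login needed
```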

If you’re starting to get a little nervous about consumer privacy, you should be. Microsoft says Passport will allow companies to share information about customers only if those customers have previously authorized it, and Liberty doesn’t yet support the exchange of consumer data other than individuals’ names and whether they have been authenticated. But there’s no doubt that such technologies will eventually make it easier for companies to share detailed customer data.

Before all this can happen, however, directories and identity-management systems need to get better at exchanging authentication information with one another. This process is under way, with standards such as the Security Assertion Markup Language (SAML) and WS-Security nearing completion. But it will likely be several years before the sharing of such information is truly seamless. In the meantime there’s a bewildering array of potential standards for your IT staff to choose from. For the near future, expect delays, kinks, and hitches in any identity-management project that involves sharing information with other companies or with directories in other parts of your organization.

For companies, however, exchanging identity information is a powerful boon, and it will be key to making business-to-consumer and business-to-business commerce flow more smoothly. After all, if you don’t know who you’re doing business with, how can you even get started? Directories, and now identity management, will help ensure that people really are who they say they are.

Link: Importance of Knowing Who’s Who

Link broken? Try the Wayback Machine.