Hadoop Summit 2012 Highlights

The fifth annual Hadoop Summit brought an estimated 2,100 attendees to the Convention Center in downtown San Jose, Calif., last week. The two-day, big-data event was hosted by Yahoo, Hadoop's first large-scale user, and Hortonworks, a leading commercial support-and-services provider.

Among the announcements coming out of this year's summit were updates from the three leading commercial Hadoop distributors. Hortonworks unveiled the first general release of its Apache Hadoop software distro, Hortonworks Data Platform (HDP) 1.0, a day before the start of the show. The company bills the open source data management platform as "the next generation enterprise data architecture." Built on Apache Hadoop 1.0, this release includes a bundle of new provisioning, management, and monitoring capabilities built into the core platform. It also comes with an integration of the Talend Open Studio for Big Data tool.

Cloudera got a big jump on the competition by announcing a new release a week earlier, but the company showed off its new CDH4 and Cloudera Manager 4, which are part of Cloudera Enterprise 4.0, at the show. Version 4 of CDH, the company's open source Hadoop platform (on which Enterprise 4.0 is built), expands the number of computational processes executable under Hadoop and introduces a new feature designed to allow software programs to be embedded within the data itself. Dubbed "coprocessors," these programs are executed when certain pre-defined conditions are met.
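The coprocessor idea — code registered with the data store itself that fires when a write meets a condition — can be illustrated with a toy model. This is a conceptual sketch only: real HBase coprocessors are Java classes loaded into the region server, and every name below is illustrative rather than part of any Hadoop API.

```python
class Table:
    """Toy data store that runs registered 'coprocessors' on writes."""

    def __init__(self):
        self.rows = {}
        self.observers = []   # (condition, action) pairs living with the data

    def register(self, condition, action):
        self.observers.append((condition, action))

    def put(self, key, value):
        self.rows[key] = value
        # Run each registered program whose condition the write satisfies,
        # right where the data lives -- no round trip to a client.
        for condition, action in self.observers:
            if condition(key, value):
                action(key, value)

audit_log = []
table = Table()
table.register(lambda k, v: v > 100, lambda k, v: audit_log.append((k, v)))
table.put("row1", 50)    # condition not met, nothing fires
table.put("row2", 150)   # condition met, the embedded program runs
```

The point of the pattern is locality: the logic executes inside the storage tier, next to the row it concerns, instead of being pulled out to an external application.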

MapR Technologies showed off version 2.0 of its Hadoop distro, the first to support multi-tenancy. The new version also comes with advanced monitoring and management tools, isolation capabilities, and added security. MapR is offering this release in a basic edition (M3) and an advanced edition (M5). The M3 edition supports HBase, Pig, Hive, Mahout, Cascading, Sqoop and Flume. The M5 edition adds high availability features and additional security tools, including JobTracker HA, Distributed NameNode HA, Snapshots and Mirroring.

Also, VMware launched a new open source project codenamed "Serengeti" at the show. The Web site describes the project's goal "to enable the rapid deployment of an Apache Hadoop cluster... on a virtual platform." VMware says the project aims to produce a virtualization-aware Hadoop configuration and management tool. VMware is partnering with Cloudera, Hortonworks, MapR and big data analysis company Greenplum on this project.

Apache Hadoop is an increasingly popular, Java-based, open-source framework for data-intensive distributed computing. The system is designed to analyze a large amount of data in a small amount of time. At its core, it combines an implementation of Google's MapReduce programming model with the Hadoop Distributed File System (HDFS). MapReduce is a programming model for processing and generating large data sets. It supports parallel computations over large data sets on unreliable computer clusters. HDFS is designed to scale to petabytes of storage and to run on top of the file systems of the underlying OS.
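The map, shuffle, and reduce phases described above can be sketched in a few lines of single-process Python. This is a minimal illustration of the programming model, not Hadoop API code; in a real cluster the framework distributes each phase across many machines.

```python
from itertools import groupby
from operator import itemgetter

def map_fn(line):
    # "Map" phase: emit an intermediate (key, value) pair per word.
    return [(word, 1) for word in line.split()]

def reduce_fn(word, counts):
    # "Reduce" phase: combine all values that share one key.
    return (word, sum(counts))

def map_reduce(lines):
    # "Shuffle": group intermediate pairs by key, as the framework would
    # before handing each key's values to a reducer.
    pairs = sorted(kv for line in lines for kv in map_fn(line))
    return [reduce_fn(key, [v for _, v in group])
            for key, group in groupby(pairs, key=itemgetter(0))]

result = map_reduce(["big data", "big clusters"])
# result -> [("big", 2), ("clusters", 1), ("data", 1)]
```

Because each map call and each reduce call is independent, the framework can rerun any of them on another node after a failure, which is what makes the model work on unreliable commodity clusters.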

Attendance at this year's Hadoop Summit set a record. The first event, held in 2008, drew an estimated 500 attendees. The Summit's sponsorship roster underscores the growing importance of the data analysis platform. Cisco, Facebook, IBM, Microsoft and VMware were among the heavy hitters adding their support to the event; there were 49 event sponsors total.

Speaking at the conference, Facebook engineer Andrew Ryan talked with attendees about his company's record-setting reliance on HDFS clusters to store more than 100 petabytes of data. During his talk, Ryan explained how Facebook has worked around Hadoop's key weakness: its reliance on a single metadata server (the NameNode) that coordinates all filesystem operations across a pool of DataNodes. If a DataNode goes down there's little impact on the cluster, but if the NameNode goes down, no clients can read from or write to HDFS. The fix: AvatarNode, a piece of software designed to provide a standby NameNode that can take over. Ryan laid out the details from his talk in a blog post.
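The standby idea behind a fix like this can be shown with a toy model: a second node replays the primary's edit log so its view of the namespace stays current enough to take over. This is an illustrative sketch only — AvatarNode is Java code inside HDFS, and the class and method names below are invented for the example.

```python
class NameNode:
    """Toy metadata server: tracks which blocks make up each file."""

    def __init__(self):
        self.namespace = {}   # path -> list of block IDs (metadata only)
        self.edit_log = []    # every namespace change, in order

    def create(self, path, blocks):
        self.namespace[path] = blocks
        self.edit_log.append(("create", path, blocks))

class StandbyNameNode(NameNode):
    def catch_up(self, primary):
        # Replay the primary's edit log so this node could take over.
        for op, path, blocks in primary.edit_log:
            if op == "create":
                self.namespace[path] = blocks

primary = NameNode()
primary.create("/logs/day1", ["blk_1", "blk_2"])

standby = StandbyNameNode()
standby.catch_up(primary)   # standby now mirrors the namespace
```

The key point is that only metadata needs replicating; the file blocks themselves already live redundantly on the DataNodes.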

Posted by John K. Waters on June 18, 2012


JNBridge 'Lab' Helps .NET Devs With Hadoop

JNBridge, maker of tools that connect Java and .NET Framework-based components and apps, on Monday released a free interoperability kit for developers looking for new ways of connecting disparate technologies. This second JNBridge Lab demonstrates how to build and use .NET-based MapReducers with Apache Hadoop, the popular Java-based, open-source platform for data-intensive distributed computing.

The company began offering these kits in March. The first JNBridge Lab was an SSH Adapter for BizTalk Server designed to enable the secure access and manipulation of files over the network. This new Lab aims to provide a faster and better way to create heterogeneous Hadoop apps than other current alternatives, the company claims. All of the Labs come with pointers to documentation and links to source code.

The new Hadoop Lab shows developers how to write .NET-based Hadoop MapReducers against the Java-based Hadoop API, which avoids the overhead of the Hadoop streaming utility. The resulting .NET code can run directly inside Hadoop processes.

"Streaming works," said JNBridge CTO Wayne Citrin, "but it's kind of thin gruel. It really makes non-Java MapReducers into second-class citizens in the Hadoop world. You have to manage and configure a separate process. You have to parse the output and put it back together when you're done, which is another overhead cost. Then there's the overhead of going through sockets. It's not surprising that not that many people actually use .NET in this case."

The code provided in the Hadoop Lab can be run as an example, Citrin explained, or it can be used as a design pattern for users to develop their own Hadoop apps using C# or VB.NET.

JNBridge started its Labs project earlier this year as part of the company's 10-year anniversary celebration.

"It was a way of showing people how to use the out-of-the-box functionality of JNBridgePro to do useful things that they may not have thought of, or that don't exist out there as products," Citrin said.

The company's flagship product, JNBridgePro, is a general purpose Java/.NET interoperability tool designed to bridge anything Java to .NET, and vice versa, allowing developers to access the entire API from either platform. Last year the company stepped into the cloud with JNBridgePro 6.0.

Why would anyone want to build MapReducers in .NET?

"For the same reasons you would want to use JNBridgePro in the first place," Citrin said. "Your organization might have .NET-based libraries they need or want to use in a Hadoop application. Your company might have more people skilled in .NET than Java. Or you might be working with Windows Azure, which supports Java, but the .NET tooling is better."

Citrin confesses that developers have yet to begin trampling each other to download the JNBridge Labs, but there has been enough interest and feedback to keep the project going.

The JNBridge Labs are available for download, free from the company's Web site. Although the kits are free, they require a JNBridgePro license for use beyond the trial period. The company announces new Lab releases on its blog.

Posted by John K. Waters on May 21, 2012


Architect Spotlight: Brian Noyes, High Flying F-14 Vet Turned .NET MVP

Brian Noyes didn't set out to become a software architect. He started writing code "to stimulate his brain," while he was flying F-14 Tomcat fighter aircraft for the U.S. Navy. As his software expertise developed, he found himself "going down a technical track" managing onboard mission computer software in the aircraft, and later, systems and ground support software for mission planning and controlling satellites.

"It was just a hobby," Noyes says, "but it led me to work that I still love to do."

Noyes left the Navy in 2000 and today is chief architect at IDesign, a .NET-focused architecture, design, consulting, and training company. He's also a Microsoft Regional Director and an MVP, and the author of several books, including: Data Binding with Windows Forms 2.0: Programming Smart Client Data Applications with .NET (Addison-Wesley Professional, 2006) and Developer's Guide to Microsoft Prism 4: Building Modular MVVM Applications with Windows Presentation Foundation and Microsoft Silverlight (Microsoft Press, 2011).

Noyes specializes in smart client architecture and development, presentation-tier technologies, ASP.NET, workflow and data access. He writes about all these topics and more on his blog, ".NET Ramblings."

Not surprisingly, Noyes is a fan of Microsoft's Extensible Application Markup Language (XAML). He says Microsoft got a lot of things right when it created this declarative, XML-based language for the .NET Framework back in 2005/2006.

"XAML provides a clean separation between the declarative structure and the code that supports it," Noyes says. "That can either come in the form of the code-behind that's inherently called to it in the way Visual Studio does it, or using the Model View ViewModel (MVVM) pattern to have even better separation. They put mechanisms into the bindings and control templates and data templates that just give you this nice separation of things -- if you want them." "

They really facilitated both ends of the spectrum," he continues. "They made it so you have a drag-and-droppy, RAD-development kind of approach, where you're not so concerned about the cleanliness of the code and how maintainable it is and you just want to get it done. Or, if you're more of maintainability Nazi, as I am, and want absolutely clean code and separation of concerns and things like that, it facilitates that as well."

XAML shipped with .NET 3.0, along with the Windows Presentation Foundation (WPF), of which Noyes is also a fan. "One thing I always say about WPF is that they did a darned good job of getting it right the first time," he says, "because, since the first release, there has been very little change to the core framework. Whereas with Silverlight they've had to do substantial improvements with each release to inch it up closer to what WPF was capable of."

Noyes explores uses for all of these tools and technologies in his sessions scheduled for upcoming Visual Studio Live! conferences. "For events like this, it's about giving them knowledge they can take home and use in the trenches the very next day," he says. "I try to keep things close to the code."

Posted by John K. Waters on May 11, 2012


Looking Back at CSLA .NET Framework's Open Source History

When the CSLA .NET framework made its first appearance in a book written by its creator, Rockford Lhotka, back in 1998, it was little more than a hunk of sample code -- at least that's how he saw it. But readers of that extremely popular book, VB6 Business Objects, saw it as something more.

"That first implementation was not really a framework per se," Lhotka recalls. "But after I published the book, I would get these e-mails from people who would say, 'Hey, I bought your book and I was using your framework and I wish it did this,' or, 'Your framework has a bug.' Initially I would respond that I don't have a framework. Over time I gave in and decided, hey, maybe I do have a framework."

Today CSLA is one of the most widely used open source software development frameworks for .NET. It's designed to help developers build a business logic layer for Windows, Web, service-oriented and workflow applications.

"It helps developers create a set of business objects that contain all of their business rules in a way that allows those object to be reused to create many different kinds of user interfaces or user experiences," Lhotka explains. "And once you've created this business layer using CSLA, you can create a WPF interface, a Silverlight interface, a Web interface, or a service interface on top of it."

"But then it gets even more interesting," he continued, "because those same objects can work on a Windows Phone, an Android device, and the new Windows Runtime (WinRT). Even if you're not building distributed applications (which most developers are these days), the CSLA framework gives an application a lot of structure and organization, which leads to long-term maintainability."

Lhotka (Rocky to his friends), CTO of Magenic, will be holding workshops on "Full Application Lifecycle with TFS and CSLA .NET" at the upcoming Visual Studio Live! New York and Visual Studio Live! Redmond conferences, as well as sessions about other topics. Lhotka is both a Microsoft Regional Director, which is a designated technical expert and community leader who's not a Microsoft employee, and an MVP (Microsoft Most Valuable Professional).

Lhotka created the .NET implementation of CSLA in 1999. The framework was originally conceived in 1996 in the world of Microsoft's Component Object Model (COM) and Visual Basic 5, and dubbed "Component Based Scalable Logical Architecture." But when Lhotka re-implemented it for .NET, which is not component based, the name "CSLA" became "just an unpronounceable word," he says.

CSLA .NET is currently in version 4.2, which supports Visual Studio 2010, Microsoft .NET 4.0, Silverlight 4 and Windows Phone 7. Version 4.2 and higher supports Android, Linux and OS X through the use of Mono, MonoTouch and Mono for Android.

More information about the CSLA framework, including a FAQ page, a download page, documentation, and a blog, can be found on Lhotka's Web site.

Posted by John K. Waters on May 7, 2012


'Big Data' Definition Evolving with Technology

While there's lots of talk (a lot of talk) about big data these days, according to Andrew Brust, Microsoft Regional Director and MVP, there currently is no good, authoritative definition of big data.

"It's still working itself out," Brust says. "Like any product in a good hype cycle, the malleability of the term is being used by people to suit their agendas."

"That's okay," he continues, "There's a definition evolving."

Still, Brust, who will be speaking about big data and Microsoft at the upcoming Visual Studio Live! New York conference, says that a few consistent big data characteristics have emerged. For one, it can't be big data if it isn't...well...big.

"We're talking about at least hundreds of terabytes," Brust explains. "Definitely not gigabytes. If it's not petabytes, we're getting close, and people are talking about exabytes and zettabytes. For now at least, if it's too big for a transactional system, you can legitimately call it big data. But that threshold is going to change as transactional systems evolve."

But big data also has "velocity," meaning that it's coming in an unrelenting stream. And it comes from a wide range of sources, including unstructured, non-relational sources -- click-stream data from Web sites, blogs, tweets, follows, comments and all the assets that come out of social media, for example.

Also, the big data conversation almost always includes Hadoop, Brust says. The Hadoop Framework is an open source distributed computing platform designed to allow implementations of MapReduce to run on large clusters of commodity hardware. Google's MapReduce is a programming model for processing and generating large data sets. It supports parallel computations over large data sets on unreliable computer clusters.

"The truth is, we've always had Big Data, we just haven't kept it," says Brust, who is also the founder and CEO of Blue Badge Insights. "It hasn't been archived and used for analysis later on. But because storage has become so much cheaper, and because of Hadoop, we can now use inexpensive commodity hardware to do distributed processing on that data, and it's now financially feasible to hold the data and analyze it."

"Ultimately the value Microsoft is trying to provide is to connect the open-source Big Data world (Hadoop) with the more enterprise friendly Microsoft BI (business intelligence) world," Brust says.

Posted by John K. Waters on April 10, 2012


Developers: Gesture and Audio Input Are in Your Future

It may not happen tomorrow, but sooner or later you're going to find yourself writing multitouch, gesture- and audio-input-based applications, Tim Huckaby declared during his day two keynote at the Las Vegas edition of the Visual Studio Live! 2012 developer conference series.

"I'm old enough that I remember when using a mouse was an unnatural act!" Huckaby told a packed auditorium at the Mirage hotel on Wednesday. "Now it's second nature. I'd argue that some of this voice- and gesture-capable stuff will be just as natural in a few short years."

Huckaby's keynote focused on human interactions with computers in non-traditional "natural-type" ways -- sometimes referred to as the Natural User Interface, or NUI -- and how it will impact the lives of .NET developers. That's something of a specialty of his Carlsbad, Calif.-based company, InterKnowlogy, which has delivered dozens of large WPF, Silverlight, Surface and Windows 7 Touch applications to clients across the country. He also founded a company, Actus, that specializes in interactive kiosk applications.

In a lively keynote during which he interacted with various gesture- and audio-based applications by flailing his arms and shouting commands, Huckaby argued that multitouch is now cheap, consumer-grade technology that everyone already wants.

"It's now cheap to do multitouch," Huckaby said. "And it improves usability, incredibly. You will see every computing device from here on in -- whether it's a smart phone or your desktop -- every one of them will be multitouch enabled."

To illustrate the pace of NUI evolution, Huckaby demonstrated a 3D application built by his company in early 2007 for cardiac surgeons that allows the user to manipulate the heart image via a touch screen. He contrasted that app with a similar one InterKnowlogy developed recently based on Microsoft's Kinect motion sensing input device.

"This was prototyped in a couple of weeks, and it's just .NET," Huckaby said.

He also demonstrated a touch-screen craps table built by his company that interacts with real-world objects. The bets were activated with physical chips laid down on the screen and "dialed" to establish the size of the bet, and the dice were actual transparent cubes that, when tossed, registered on the board.

The keynoter drew good-natured laughter from his audience as he waved his arms and strained a damaged rotator cuff to demo a physical therapy application designed to track a patient's movements through prescribed exercises and display them on a screen in real time. The application provided feedback to help the patient get the movements right. The application was based on Kinect, which Huckaby said is currently the world's fastest selling consumer electronics device.

The audience was also treated to a video about a neural computer interface, a spider-like contraption worn on the head, which was used to send commands to a wheelchair. Huckaby said the software for the device could be built with .NET right now. He wrapped up his demos with a video of an application that supported physical interactions with virtual objects. He called the C3-based app "a first go at the Holodeck" from Star Trek. He also showed off a game-based app developed for NASA.

"It's time for all of you to start thinking about building applications that use a Natural User Interface," Huckaby told the crowd. "Gesture is coming, fast; multitouch is here. And we might not be thinking commands at computers just yet, but we'll be doing that, too. It's just a matter of time."

Posted by John K. Waters on March 29, 2012


New AppDev Alliance Launches: The App Economy's 'Connective Tissue'

Last month, the CEO network at Technet.org published a study, titled "Where the Jobs Are: The App Economy," that puts the number of jobs generated in the U.S. by the so-called app economy in the last four years somewhere near the half million mark. The organization, which bills itself as a bipartisan political network of senior executives focused on promoting the growth of "technology-led innovation," concluded the following: "The incredibly rapid rise of smartphones, tablets and social media, and the applications -- 'apps' -- that run on them, is perhaps the biggest economic and technological phenomenon today."

That conclusion came as no surprise to Jake Ward, head of communications for the newly formed Application Developers Alliance. His nascent organization is only a few weeks old, but it has been under development for a couple of years.

"That work involved a lot of research -- a lot of focus groups, surveys and conversations with individual developers and the companies that care about them," Ward told me. "One thing that was prevalent in all of their answers, and the overarching theme of every conversation was, Wow, there are a lot of apps out there!"

The Washington, DC-based Apps Alliance, as its growing membership calls the organization, is a nonprofit support, education, and advocacy group "committed to helping developers test and ship great ideas," the Web site says. Launched earlier this year, the Alliance membership currently comprises both individual developers (about 55 percent) and corporate members (about 45 percent). Developers of every stripe are welcome, Ward says.

"We are as agnostic as we can possibly be," he says. "If you are a developer -- whether you're an independent app builder or an enterprise programmer -- and you see value in the organization and want to participate, we want you," Ward says. "If you're an enterprise software developer by day, you might be a Python coder by night. It only matters to us that that's what you want to do. The next great way to build an app can come from anywhere."

The cornerstone of the org's member benefits is its Alliance Network, which is a social network for members only. It's designed to allow developers to collaborate, to find each other, to have discussions on message boards and to engage with corporate members through dedicated landing pages to which they can subscribe (it looks something like Facebook Fan Pages meets LinkedIn Groups).

The Alliance plans to deliver the rest of its member services through that network. Among those are an education platform, a certification platform and a worldwide events aggregator. The group is also putting together a value-added program that includes things like discounts on services, access to events, scheduling of events, the network itself and the educational programs. As of this writing, the Services page lists a discount on Mobile App Tracking from HasOffers; $10 off registration with Digitalmusic.org's Music Startup Academy; free app privacy policy from PrivacyChoice; 10,000 free impressions from Swappit; and a 10 percent discount on services from cloud hoster Rackspace.

The Alliance is the brainchild of attorney Jonathan Potter, who also founded the Digital Media Association, where he served as executive director for about 12 years.

Individuals can join for free by registering on the Alliance Web site. Ward says individual memberships will be free for the foreseeable future. The Alliance will be funded by the annual dues of corporate memberships, but Ward vows that the developers are going to "drive the bus."

There's a lot to like about the Apps Alliance, but let's face it, there are a lot of developer-focused organizations out there. Do we really need another one?

"We believe that the formation of the Apps Alliance is an essential step toward normalizing the apps industry so that it's trajectory continues upward, with no slowing, no plateauing, just a continuous driver of innovation and standardization and the rising tide of the ecosystems," Ward says.

"The mission of this organization," he adds, "is to be the connective tissue of the industry." 

More information on the Application Developers Alliance is available on the organization's Web site. Stop by and watch the intro video, and then let us know what you think.


Posted by John K. Waters on March 22, 2012


Spring Hadoop Fits Neatly Under the Spring Data Umbrella 

VMware's recent announcement of an integration of its Spring Framework with Apache Hadoop is aimed at making life easier for enterprise Java developers who want to use the popular open-source platform for data-intensive distributed computing. The new Spring Hadoop is a lightweight framework that combines the capabilities of the Spring framework with Hadoop's ability to allow developers to build applications that scale from one server to thousands and deliver high availability through the software, rather than hardware.

By integrating the Hadoop Framework, a Java-based, open-source platform for the distributed processing of large data sets across clusters of computers using a simple programming model, with the Spring Java/J2EE application development framework, VMware has created a project that fits neatly under the Spring Data umbrella. The open-source Spring Data project comprises a group of sub-projects seeking to make it easier to develop apps that use a bunch of new data access technologies, such as non-relational databases, cloud-based data services and MapReduce frameworks like Hadoop.

In addition to Apache Hadoop, the list of Spring Data sub-projects includes, among others, the Spring Data JPA, which simplifies the development of Java Persistence API-based data access layers; VMware's GemFire distributed DB management platform; the Redis advanced key-value store; and the MongoDB document-oriented database.

The new framework also supports comprehensive HDFS data access through such Java Virtual Machine (JVM) scripting languages as Groovy, JRuby, Jython and Rhino. HDFS (Hadoop Distributed File System) is designed to scale to petabytes of storage and to run on top of the file systems of the underlying OS.

The list of Spring Hadoop capabilities also includes: declarative configuration support for HBase; dedicated Spring Batch support for developing workflow solutions that incorporate HDFS operations and "all types of Hadoop jobs;" support for the use with Spring Integration "that provides easy access to a wide range of existing systems using an extensible event-driven pipes and filters architecture;" Hadoop configuration options and templating mechanism for client connections to Hadoop; and declarative and programmatic support for Hadoop Tools, including FsShell and DistCp.
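In the declarative style the project promotes, a Hadoop job is wired up in Spring XML rather than in boilerplate Java. The fragment below is a hedged sketch: the `hdp` namespace and `configuration`/`job` elements follow the project's announced conventions, but the paths, class names, and host address are placeholders, not values from the release.

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:hdp="http://www.springframework.org/schema/hadoop">

  <!-- Client connection settings for the cluster (address is illustrative) -->
  <hdp:configuration>
    fs.default.name=hdfs://localhost:9000
  </hdp:configuration>

  <!-- A MapReduce job declared as a Spring-managed bean -->
  <hdp:job id="wordCountJob"
           input-path="/input/logs" output-path="/output/counts"
           mapper="org.example.WordCountMapper"
           reducer="org.example.WordCountReducer"/>

</beans>
```

The appeal is the usual Spring one: the job, its configuration, and its dependencies become injectable beans, so the same application code can be pointed at different clusters by swapping configuration.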

Developer Costin Leau announced the integration on the SpringSource Community blog. "…Spring Hadoop stays true to the Spring philosophy offering a simplified programming model and addresses 'accidental complexity' caused by the infrastructure," he wrote. "Spring Hadoop provides a powerful tool in the developer arsenal for dealing with big data volumes."

VMware has released Spring Hadoop under the open source Apache 2.0 license. It's available now as a free download.

Posted by John K. Waters on March 13, 2012


MIME's Co-Creator Reflects on Past, Discusses a Cloud Future

The Multipurpose Internet Mail Extensions (MIME) specification that defines the way multimedia objects are labeled, compounded and encoded for transport over the Internet turns 20 this month. Ned Freed and Nathaniel Borenstein were the two primary authors of the spec. Borenstein, who worked at New Jersey-based Bellcore at the time, sent out the first real MIME message on March 11, 1992. That message included an audio clip of the Telephone Chords, an all-Bellcore barbershop quartet featuring John Lamb, David Braun, Michael Littman and Borenstein, singing about MIME to the tune of "Let Me Call You Sweetheart."

"Those of you not running MIME-compliant mail readers won't get a lot out of this," Borenstein wrote in that message.

Are there any non-MIME-compliant mail readers today?

Borenstein, who is today Chief Scientist for cloud-based email management company Mimecast, was in Silicon Valley recently to speak at the Cloud Connect conference. I grabbed a few minutes with him when he stopped in at the Computer History Museum in Mountain View. We got the two questions he's asked most often out of the way first.

"Everybody assumes I founded this company, but when I joined, it was six years old," he said. "When I first heard the name, before I ever thought about working there, I thought, they can't do that! But as I learned about the company, I found that I loved what they were doing and I liked the people a lot. And it certainly doesn't hurt them to have the author of MIME working at Mimecast."

Borenstein says most people also want to know if he ever thinks about how much money he would have made if he'd had some sort of financial stake in the now-ubiquitous Internet standard for multimedia data. "They ask me, 'Have you ever thought about what it would be like if you got a penny for every time MIME was used?' The answer is, yeah. It's hard to be precise, but I'd estimate that MIME is used about a trillion times a day. My current income would be roughly the GDP of Germany."

Borenstein joined Mimecast in 2010 after spending eight years at IBM as a distinguished engineer. His duties include long-term product planning, external writing and speaking, and patent strategy and submissions.

"When I joined the company, it had never filed a patent, because all the principals believe, as I do, that patents are deeply evil," he said. "Unfortunately, I had to point out that it's also true that deeply evil people can hurt you, and you really have a responsibility to protect yourself. Our patent strategy is primarily defensive."

Mimecast filed for a patent last year to support a cloud-enabled e-mail analytics system it is developing. The company's flagship service provides cloud-based e-mail management for Microsoft Exchange. That service includes e-mail archiving, continuity and security. It unifies "disparate and fragmented e-mail environments into one holistic solution that is always available from the cloud," the company says on its Web site.

"The cloud makes it possible for companies of a size that could never really contemplate it before to make practical and valuable use of big data and business analytics," Borenstein said. "They can take all that data and finally use it for something besides a dead repository."

In fact, Mimecast's new e-mail analytics system, which he called "proactive e-mail," takes on that very problem. If the demo he showed me is any indication, it could go a long way toward solving the so-called organizational memory problem.

"In any large organization, there's always someone who knows what you're trying to find out," he said, "and yet finding that information is almost always harder than rediscovering it. This is where I see the cloud going: supporting value-added apps that dig into those company archives and bring your own information back to you so that you can use it."

Borenstein is an energetic and positive guy, and he seems to like the work he's doing now at Mimecast very much. But he does miss the days when pure research labs like the one that spawned MIME weren't so uncommon.

"Labs like Bellcore, which was an institution of nearly pure research, are rare birds these days," he said. "And we all suffer for that rarity. After all, MIME grew out of a simple mandate to come up with something that would increase bandwidth usage."

"People would ask me," he added, "why are you working so hard on getting pictures into e-mail? And I'd say, someday I'm going to have grandchildren, and I want to be able to get pictures of them by e-mail. And they would laugh, because back in the 1980s that was too far-fetched."

Borenstein showed me the first photo of his twin granddaughters that his daughter sent him by e-mail: an ultrasound image of a cluster of cells.

"The thing I had envisioned all those years ago was supposed to be much cuter," he said.

On March 5, ACS, the corporate successor to Bellcore, celebrated the twentieth anniversary of MIME at its New Jersey headquarters with, among other things, a reunion of the Telephone Chords. Borenstein said he was practicing "so I don't miss the notes this time." I couldn't make it to the event, but the original message featuring the Telephone Chords singing their MIME song in four-part harmony is available on Borenstein's "MIME & Me" Web page.


Posted by John K. Waters on March 9, 2012