Not sure how many people have seen the movie La Vallée, a French film that came out in 1972. The only reason I saw it is that I had previously discovered the movie’s most excellent soundtrack: Pink Floyd’s Obscured By Clouds.
The movie opens with some groovy music and a fantastic flyover of mountains in Papua New Guinea. When it reaches a particularly cloudy stretch of the mountains, the subtitles say the following:
We are above unexplored regions.
They are not on the map…
or more precisely, they are shown only as white spots.
It is the unknown.
In the first 12 minutes or so of the film, the area they are flying over is shown on a printed map:
What is in this area, and why is the group of explorers trying to get there? Apparently, it’s paradise. The nearby natives think it is quite the opposite.
I think the natives are right. The Cloud is anything but a paradise. Especially if you expect your data to always be there, accessible only to you and those you allow to access it.
How do you know you’ll be able to get to your data? Forget for a moment the issue of the data being in the cloud: can you even reach where the data is stored with enough bandwidth to make that access feasible? With WiFi-only tablets like the Nexus 7, there are no guarantees. Your odds are better with 3G/4G tablets, but even then there are plenty of times and places where your data is Obscured By Cloud.
Let’s assume you can actually get to your data with a fast enough connection. What is keeping that data there, safe and secure? Your cloud provider is telling you that data is secure, but how do you really know they are doing what they say? How do you know they will be in business tomorrow? And if they go out of business (or their servers get confiscated by some government entity), what happens next?
Granted, it’s unlikely that a big player like Amazon or Google will disappear tomorrow. However, the threat of data being inadvertently leaked, or of government entities seizing hardware, is still very real. So is the threat of the terms of service changing without notice.
If you truly want to be able to access data from anywhere, local storage always wins. The cloud definitely provides some new capabilities, but don’t lose sight–or control–of your data.
Undoubtedly, you've seen this video announcing Google Fiber. Very clever use of The Cars' song "Just What I Needed."
You might wonder: why do I need to be able to download and upload at speeds of up to 1 gigabit per second? Isn't that a bit excessive?
I remember the days before I knew about the Internet. I was dialing up bulletin boards on a 300 baud acoustic coupler modem. Everything was a text-only affair. The few times I downloaded anything, I expected a long wait that tied up both my computer and my phone line. For example, a 143k floppy disk image for an Apple ][ took nearly an hour, as I recall.
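A quick back-of-the-envelope check of that memory (my arithmetic, not from any source), counting only the raw bits and ignoring serial framing and protocol overhead, both of which would add time:

```python
# How long a 143k Apple ][ disk image takes at 300 baud, ignoring
# start/stop bits and any transfer-protocol overhead.
disk_bytes = 143 * 1024   # 143k disk image
baud = 300                # bits per second on the wire

seconds = disk_bytes * 8 / baud
print(f"{seconds / 60:.0f} minutes")  # → 65 minutes
```

So "nearly an hour" was, if anything, optimistic.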
In college, I got my first taste of the graphical Internet--I had already had a taste of the text-based Internet by high school. I played with the first versions of NCSA Mosaic, Netscape, and other web browsers. On 19-inch color screens. On HP Unix workstations. On T1 lines. It was a dream compared to the dialup and black-and-white screens I had to endure otherwise.
The first "broadband" I had at my house was an ISDN modem in the late 1990s. Cable Internet wasn't widely available back in those days, so that's what I could get. Twice the speed of a regular dialup modem (yes, I could get 2 channels back then).
I finally got something that fit the modern definition of broadband at home about 10 years ago, when I moved someplace that offered Internet service through the local cable company. I can't remember exactly how fast it was, but it was on the order of single-digit megabits per second, with very restricted upstream bandwidth--restrictions that have since been lifted over time.
Today, on Comcast, my broadband speed test shows a nice, happy 35 Mb/s download and 5 Mb/s upload. And at least for what I do today, that seems fast enough.
But that is very short-sighted thinking.
Take the computer I was using with my 300 baud modem: an Apple ][. It ran a 1 megahertz processor, had up to 64k of RAM, and used a couple of 143k floppies. Graphics capabilities were relatively limited. Sound was little more than a piezo speaker capable of rudimentary beeps.
Today, I can hold in the palm of my hand a battery-powered computer with dual-core processors running at gigahertz speeds, a gigabyte of RAM, and gigabytes of storage. It can display on its touch screen images taken by the device itself, play any song in the world with excellent fidelity, and connect to a worldwide network of other computers.
All of these things are possible because people innovated, making the computers faster, cheaper, smaller, and connected via wireless networking. I don't know if any of the people who built the components that make up these devices had any idea the innovations those components would help bring about.
In the same way, we have no idea of the things that will be possible as we move from broadband connections of single-digit megabits per second to hundreds and thousands of megabits per second.
We can guess or dream about the possibilities, of course, but the only way to find out is to actually build it. While there are plenty of companies that wish to slow down the inevitable--namely the existing incumbent telcos and cable companies--it will be built. Whether Google does it or some other company does, it needs to be built.
So as the market and technology have evolved, we’ve decided to change our approach and replace our static 250 GB usage threshold with more flexible data usage management approaches that benefit consumers and support innovation and that will continue to ensure that all of our customers enjoy the best possible Internet experience over our high-speed data service.
In the next few months, therefore, we are going to trial improved data usage management approaches comparable to plans that others in the market are using that will provide customers with more choice and flexibility than our current policy. We’ll be piloting at least two approaches in different markets, and we’ll provide additional details on these trials as they launch. But we can give everyone an overview today.
The first new approach will offer multi-tier usage allowances that incrementally increase usage allotments for each tier of high-speed data service from the current threshold. Thus, we’d start with a 300 GB usage allotment for our Internet Essentials, Economy, and Performance Tiers, and then we would have increasing data allotments for each successive tier of high speed data service (e.g., Blast and Extreme). The very few customers who use more data at each tier can buy additional gigabytes in increments/blocks (e.g., $10 for 50 GB).
The second new approach will increase our data usage thresholds for all tiers to 300 GB per month and also offer additional gigabytes in increments/blocks (e.g., $10 per 50 GB).
In both approaches, we’ll be increasing the initial data usage threshold for our customers from today’s 250 GB per month to at least 300 GB per month.
I have to give Comcast some credit for this. They are looking at increasing the cap–because, let’s face it, we’re all using the Internet more and more–and making those who actually use more bandwidth pay for it. The “overage” is even somewhat reasonable, unlike the overage on your typical mobile phone data plan ($10 gets you one gigabyte instead of fifty).
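The per-gigabyte arithmetic makes the difference plain:

```python
# Effective per-GB overage pricing: Comcast's proposed $10-per-50 GB block
# versus a typical mobile data plan's $10-per-1 GB overage.
comcast_per_gb = 10 / 50   # $0.20/GB
mobile_per_gb = 10 / 1     # $10.00/GB

print(f"Comcast: ${comcast_per_gb:.2f}/GB, mobile: ${mobile_per_gb:.2f}/GB "
      f"({mobile_per_gb / comcast_per_gb:.0f}x more)")
```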
To be fair, this isn’t entirely an iPad-specific offering. The MicroSIM can be used in any device that supports one, so it will also work in the newer iPhones. Also, to get the 0.27 EUR per MB rate (which, FYI, is billed in 25k increments), you have to buy a 200MB bundle for 55 EUR. That 200MB is good for 30 days, and you can get that rate in 44 countries.
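The bundle math, for those keeping score (assuming the 25k billing increment means 25 KB and a 1024 KB megabyte):

```python
# 55 EUR for 200MB works out to 0.275 EUR/MB -- the advertised 0.27 EUR/MB,
# give or take rounding -- metered in 25k billing increments.
bundle_eur = 55
bundle_mb = 200

per_mb = bundle_eur / bundle_mb
print(f"{per_mb:.3f} EUR/MB, {1024 / 25:.2f} billing increments per MB")
```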
I had tweeted about this problem earlier and found out that Comcast seems to have given up on its effective social media program: Comcast Bonnie no longer works there. She replied to me that “they got rid of me.” She was great at what she did, but I’ve seen this sort of thing before. A company has a person doing great and important work, and it fires her because some bonehead at the company couldn’t monetize it. Apparently, the company would rather have bad PR like this. Accountants will eventually ruin all American business.
This is the tricky thing about “social media.” We know it’s good, but it’s hard to quantify exactly how good. When times get rough, it gets pared back or, in the case of Comcast Bonnie, “eliminated.”
Unfortunately, human beings remember these bad experiences and use them as a basis for deciding which services to use in the future. And since cable is the only real choice for most people, Comcast can pretty much take on the whole “we don’t care, we don’t have to” mentality on these things.
So I scheduled the service guy to come on Tuesday and figured I’d limp along at analog modem speeds. In the process, I checked my email and saw a note from one of the editors of my blog, Sergio Gasparrini, who had apparently listened to the podcast—from Europe—and suggested that Mother’s Day Skype calls may have been the culprit. I thought this was laughable until mid-afternoon, when my speeds began to increase by the hour.
By 9 p.m. on Sunday, the speed had ratcheted back up from 1 Mbps at around 5 p.m. to 3 Mbps, then 4, 9, and 11 Mbps. It was like clockwork. As I write this, the system has restored itself to full speed.
This seems plausible, but only barely. Skype and other Voice over IP tools do not require a lot of bandwidth; they do require low latency, though. The only way this explanation holds up is if there were a significant number of video calls–which require both high bandwidth and low latency.
In any case, this is definitely something I remember from growing up with the Bell System. Mother’s Day was always a big calling day. “All circuits are busy” messages were pretty common. What scares me is how quickly we all forget…
IPv6 is the next generation of IP–the protocol by which most of our computers, phones, and other related devices talk to each other and to the Internet. Today, everything generally talks using IPv4, which has a 32-bit address space, or roughly 4 billion possible addresses. Both because of the sheer number of devices and the number of “reserved” addresses within the IPv4 space, the number of globally available IP addresses is running out.
To put it in perspective, as I write this, there are still a few /8 blocks unallocated by IANA. These are distributed to regional registries, which are responsible for distributing the IPs to ISPs, who in turn distribute them to you. A /8, in IPv4, is 16,777,216 IP addresses. That seems like a lot of addresses, until you realize that, depending on how those IPs are allocated, the number of usable IPs ends up being somewhat less.
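These numbers are easy to verify with Python’s standard ipaddress module:

```python
import ipaddress

# The IPv4 space is 32 bits, so 2**32 addresses total, carved into 256 /8
# blocks of 2**24 (16,777,216) addresses each. 10.0.0.0/8 is just an example.
total_v4 = 2 ** 32
slash8 = ipaddress.ip_network("10.0.0.0/8")

print(f"{total_v4:,} IPv4 addresses")          # 4,294,967,296
print(f"{slash8.num_addresses:,} per /8")      # 16,777,216
print(total_v4 // slash8.num_addresses, "/8 blocks in total")  # 256
```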
Even so, once IANA runs out of /8s, the individual registries and ISPs still likely have caches of IPv4 addresses. The problem of address space exhaustion probably won’t show any acute symptoms immediately, but the lack of IPv4 addresses (and the lack of wide deployment of IPv6) will start causing problems soon, creating pockets of servers that can only be accessed by one protocol or another.
We’ve actually been working around the problem of address exhaustion in the IPv4 space for some time now using network address translation. That router you get from your local consumer electronics store has been masquerading all of your computers behind a single, public IP address, providing you both a level of protection and connectivity.
Enterprises do much the same thing, except their boxes are significantly larger and they also might provide services accessible on the Internet, which means they need more than one public IP. Also, some enterprises have so many connected systems that they have, quite literally, run out of available private IP addresses (some IPs in the IPv4 space are set aside explicitly for private, non-Internet connected use).
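Those set-aside private ranges come from RFC 1918, and Python’s standard library knows about them–a quick sketch of the distinction a NAT box effectively relies on:

```python
import ipaddress

# The RFC 1918 private ranges that NAT boxes hide whole networks behind.
for cidr in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"):
    net = ipaddress.ip_network(cidr)
    print(f"{cidr:>16}: {net.num_addresses:,} addresses")

# is_private covers these ranges (among others, such as loopback):
assert ipaddress.ip_address("192.168.1.1").is_private
assert not ipaddress.ip_address("8.8.8.8").is_private
```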
In any case, the pressure is mounting to switch to IPv6. Given that some of my customers are asking about IPv6, I figured I’d get myself educated. I happen to have access to one of the people who helped define the IPv6 standards in the IETF (he works at Check Point), but there’s really no better way to learn about it than to just get it set up.
Of course, part of the problem right now is that my ISPs at home (Comcast, CenturyLink) are still serving me only IPv4 addresses. Fortunately, there are ways of tunneling over IPv4 to the IPv6 networks. One such service is TunnelBroker, run by the folks at Hurricane Electric. They tunnel IPv6 packets inside of IPv4 packets (more specifically using IP Protocol 41, designed for this purpose).
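On a plain Linux box, the manual version of such a tunnel looks roughly like this. This is a sketch only: the interface name is arbitrary, and every address below is a documentation placeholder standing in for the endpoint values your tunnel broker assigns you.

```shell
# Create a 6in4 (sit, IP protocol 41) tunnel to the broker's IPv4 endpoint.
ip tunnel add he-ipv6 mode sit remote 203.0.113.1 local 198.51.100.2 ttl 255
ip link set he-ipv6 up

# Assign the client IPv6 address from your /64 and route all IPv6 through it.
ip addr add 2001:db8:1234::2/64 dev he-ipv6
ip route add ::/0 dev he-ipv6
```

Remember that IP protocol 41 has to be allowed end to end for this to work.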
I had it working on an old Linksys router I had flashed with TomatoUSB and hacked a bit. I had IPv6 flowing through my network and was able to reach a few sites over IPv6. Then I had the realization that I was no longer protected by my router. I was now directly reachable–without a firewall! While I could fix that, I think that’s enough experimentation for now.
I guess the point is: I can make it work today. However, few people are going to want to do what I had to go through to make it work. Every hop in the network has to be IPv6 friendly and IPv6 enabled. For the home user, it’s going to have to be as simple as plugging in a router. We’ll get there, but it’s going to be a bumpy ride for the next few years.
The operators (at least ones in North America) like to sell things in buckets. A certain number of text messages. A certain number of voice minutes. Mere mortals understand these things and can make rational decisions about how many of each they want.
Data is different. It used to be that most of the US operators sold it “unlimited” (or at least unmetered). Now they are making it more like the voice and text messages: a certain number of megabytes (or gigabytes).
Many people likely to read this blog post have a vague idea of what a megabyte or gigabyte represents. The unwashed masses, however, have no clue what any of this means, nor do they want to. They just want to do their thing.
AT&T, to their credit, has a Data Calculator on their web site that will allow you to estimate how much data you need to purchase, given various activities on your mobile phone.
It’s still way too complex. To me, the correct answer is simple: tier it the way cable operators do, by the speed you’re supposed to get. That’s not a perfect way, either, especially given how some people complain about not getting their top speed. I’m sure there will be a lot more of those kinds of complaints on mobile broadband.
It is, however, something non-technical people can wrap their heads around and operators can easily differentiate. Without complicated “calculators.” Until data plans are based on something people can actually understand, with value that is clear and prices that are more reasonable, people aren’t going to buy.