the future of the internet
motivation
before thinking about ‘the future of the internet’ let’s have a look at
what the ‘internet’ actually is. the internet consists of two
low level transport protocols, that is: tcp and
udp, which both run on top of the most prominent protocol:
ip. many other protocols build on this,
http for instance. you are currently reading this blog page using http,
which means you are also connected to the internet.
if you want to be a part of the internet you need an internet address
- in short an ‘ip address’. since most of us
are still using ipv4, people often share one ip, which is bad since you
can’t directly address a person sharing an ip with someone else.
sharing usually means NAT - network address translation - and
it also means that the internet does not work correctly. NAT is a
failure and was never part of the original design of the
internet.
say you call your friend and he has no phone in his own room - but there
is one in his mother’s room. usually you can’t call between 22:00 and
8:00. the same thing is true for nat - you usually can’t reach the
other person directly.
the only possible solution to this problem is ipv6.
however we fail to integrate it. we currently fail at a lot of things
(germany 2010):
- adopting ipv6 as a standard before others do it (otherwise we
have to accept their policies & concepts)
- removing the 24h disconnects for dsl
- creating balanced upload/download links
- forcing providers to guarantee 99.9% dsl uptime; instead we still
heavily rely on the telephone system, which in
comparison has far better service guarantees (for instance dialing 112)
- getting more applications to use osi layer 5 (the session
layer) instead of binding a session to ip:tcp state
so ‘what was the internet again’?
Actually the internet is much more than just what you can do with
your browser using http. Before looking at http in detail let’s have a
look at popular protocols which do NOT use http.
- the first thing that comes to my mind are all the network
implementations for games, which most often use UDP and
in some rare cases TCP. this is a very big class of
applications
- then we have SIP and similar implementations of
‘speech over ip’, in short VOIP. most internet providers tend to like
SIP. gamers often use TeamSpeak or Mumble, but there are the skype folks
as well. the most notable difference between all sip-like implementations
and skype is that skype’s implementation is based on a p2p structure, but
since it is closed source one can only speculate how chat and speech
use this framework.
- then there are the chat clients in various flavors,
ranging from irc & psyc to IM protocols such as xmpp (jabber),
msn, aim & others. these chat clients - more often than not -
also have webinterfaces which let YOU access the service via http as
well. this is a very interesting approach, because
http usually DOES NOT WORK WELL with such a service, but we’ll have
a look at that later
- p2p filesharing, which is (at least bandwidth wise)
the most popular internet service. i think of bittorrent (kademlia)
or other similar concepts (this does not include services like
rapidshare!).
- last but not least there is a concept known as ‘middle
ware’. for a very long time i did not understand the importance
of having a ‘middle ware’ when designing internet protocols.
corba and its various implementations are what one would
call a ‘middle ware’, and from the protocol designer’s point of view it
gives you a very high degree of object interoperability.
- when a game developer has to write network code the choice usually
tends towards UDP, since the laggy nature of TCP flow control can’t be
ignored when one uses an internet connection with lots of packet loss.
however UDP basically does not care about lost packets at all, which
kind of ‘overcompensates’ for the problem one wants to solve. so most
developers extend UDP until all needed features are there. when the
network code is done and objects such as players interact with other
players via the network (read: ip), the game developer has basically
created his own ‘middle ware’.
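a minimal sketch of this ‘extend UDP until it fits’ pattern, assuming nothing more than a made-up 4-byte header: the sender prepends a sequence number to each datagram and the receiver detects gaps - usually the first feature a game protocol adds on top of raw UDP.

```python
import struct

HEADER = struct.Struct("!I")  # 4-byte big-endian sequence number

def pack(seq: int, payload: bytes) -> bytes:
    """prepend a sequence number, like a tiny custom game-protocol header."""
    return HEADER.pack(seq) + payload

def unpack(datagram: bytes) -> tuple[int, bytes]:
    (seq,) = HEADER.unpack_from(datagram)
    return seq, datagram[HEADER.size:]

class Receiver:
    """tracks the next expected sequence number and records lost datagrams."""
    def __init__(self) -> None:
        self.expected = 0
        self.lost: list[int] = []

    def feed(self, datagram: bytes) -> bytes:
        seq, payload = unpack(datagram)
        if seq > self.expected:  # gap detected: datagrams went missing
            self.lost.extend(range(self.expected, seq))
        self.expected = seq + 1
        return payload

# simulate datagram #1 being dropped somewhere on the network
rx = Receiver()
rx.feed(pack(0, b"pos=10,10"))
rx.feed(pack(2, b"pos=12,11"))  # seq 1 never arrives
print(rx.lost)                  # -> [1]
```

from here a real implementation would add acknowledgements and retransmits for the packets that actually matter - exactly the slippery slope towards a custom ‘middle ware’ described above.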
- the concept of using a ‘middle ware’ is very easy to understand: you
don’t want to care what the network looks like, you just know it’s
there. instead of writing parsers for tcp or udp streams with
‘object serialization’ and ‘object
marshaling’ by hand, one can just use an adapter in the programming
language of choice which handles all this. of course using custom types
here is most often not as easy as using core types like ‘int’, ‘double’,
‘float’ and ‘string’, but it is handled in most cases.
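as a rough illustration of what such an adapter buys you, here is a sketch using python’s built-in xmlrpc module (a far simpler cousin of corba; the function name is made up): the caller invokes a remote method as if it were local, and serialization and marshaling of the core types happen entirely behind the adapter.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add_scores(a: int, b: int) -> int:
    return a + b

# server side: register a plain function, no wire-format code anywhere
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(add_scores)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# client side: the proxy looks like a local object; the adapter handles
# marshaling the ints and parsing the response stream
proxy = ServerProxy(f"http://127.0.0.1:{port}/")
result = proxy.add_scores(40, 2)
print(result)  # -> 42
server.shutdown()
```

note that xmlrpc happens to run over http; the point here is not the transport but that neither side ever touches a raw tcp stream.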
http problems
so after looking at these use cases of ip, let’s have a
look at http, what problems http has and how they are
solved. the biggest issue with http is not the performance hit
introduced by html/xml and the resulting bandwidth loss.
instead the biggest issue when using http is the fact that http
is a ‘polling’ only protocol: a client might download a webpage like
http://www.lastlog.de but then closes the connection. this works well
when you only want to look at webpages and if you are only interested in
client side events like mouseclicks & similar. but if you want to
implement an interactive chat using http it gets more complicated, since
you would have to query the server every second for new messages from
other users (even if there were no new messages at all). the server
CANNOT connect to your browser and tell you about ‘the new message’
right away, as is done in all the IM protocols.
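a toy illustration of the polling problem (the function below stands in for one http request/response cycle; a real chat would of course poll over the network): the client has to ask again and again, and almost all of the requests come back empty.

```python
server_inbox: list[str] = []  # messages waiting on the server

def http_get_new_messages() -> list[str]:
    """stands in for one full http request/response cycle."""
    messages, server_inbox[:] = server_inbox[:], []
    return messages

wasted_requests = 0
received: list[str] = []
for tick in range(10):        # the client polls once per 'second'
    if tick == 7:             # a friend writes something at t=7
        server_inbox.append("hi!")
    msgs = http_get_new_messages()
    if msgs:
        received.extend(msgs)
    else:
        wasted_requests += 1  # round trip for nothing

print(received, wasted_requests)  # -> ['hi!'] 9
```

nine out of ten requests carry no payload at all - multiply that by every open chat tab on the internet and the cost of polling becomes obvious.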
so there are some new concepts like spdy:// from
google [1], but there are other similar ideas as well. all lead to a
next generation http protocol with callback capabilities.
opera for instance features ‘a socket in the
browser’, which is an interesting concept. both concepts
transform the ‘user’ from a pure consumer into a
‘consumer/producer’.
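a minimal sketch of the callback/push idea, using a plain tcp socket instead of a real websocket for brevity: the server can send data whenever it wants, without the client asking first - exactly what a polling-only protocol can’t do.

```python
import socket
import threading

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

def server() -> None:
    conn, _ = listener.accept()
    conn.sendall(b"new message arrived!")  # pushed, never requested
    conn.close()

threading.Thread(target=server, daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
pushed = client.recv(1024)  # the client just waits; it sent no request
client.close()
print(pushed)  # -> b'new message arrived!'
```

a ‘socket in the browser’ exposes exactly this capability to web applications, which is what makes it interesting for chat and similar services.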
in the first days most internet providers had ‘terms’ in their
‘terms and agreements’ which forbade you from offering
webservices over your private internet connection. the situation has
changed since the internet was introduced, and providers now sell
devices like the fritz-box with DDNS clients for services like
dyndns.org preinstalled. this change needs to be
extended further, as the classic patterns of ‘private’ and ‘commercial’
usage merge more every day. it gets hard to distinguish between the
service levels needed for either of the two scenarios. we need to
extend:
- service guarantees (uptime/bandwidth)
- upload bandwidth (currently internet connections are pure consumer
connections)
- QoS guarantees, where i think of ‘big bandwidth but
high latency’ vs. ‘small bandwidth with low
latency’
- since the normal ‘telephone line’ will vanish
soon, some kind of backup line for emergencies of various
kinds
So after some history, what makes opera with websockets
revolutionary? It’s the ‘ease of use’ of putting content online. You
don’t need a provider to host your stuff, just host it yourself! But
there are pitfalls, as upload bandwidth is small even with VDSL - i
don’t know why…
my guesses range from:
- ‘reducing costs since most users are consumers’ to
- forcing users to ‘shut up’ in order to sell enterprise servers for a
lot of money
there are many points for both sides and i don’t have enough
background to make a good guess.
Some final thoughts:
- Currently I’m waiting for a revolution: the upcoming ‘ipv6’ could
make skype ‘vanish’, since traversing NAT is the biggest problem in
creating a ‘free clone’ of skype (this is also valid for SIP). p2p
filesharing will also work much better.
- http will be extended with something like spdy:// very soon, which
will make the ajax programming pattern much more powerful
- the normal browser will not only host some ajax scripts but
will also host parts of the database and programs which nowadays
run on the webservers only, and will therefore turn into a local
webserver. the first step was ajax, the second step needed is spdy://,
and the third step to make this work will be webserver software adapted
to run in such a scenario. looking at the success of http: it
is an easy to use protocol with working standards and it can pass
nearly every firewall. it basically hosts all kinds of services. the
next big step will be to make the browser look like an application.
again - the first step was something like google docs, the second step
was making it a tv with plugins like flash. the next revolution for
example is compiling c/c++ code in a way that it runs on top of the
flash plugin [2]. in the end the browser and any normal host application
merge…
- flash will vanish - since its biggest use cases are
streaming videos and commercials, so guess what users will disable once
most videos are played using the in-browser player (which also has huge
performance improvements)?
links