Thursday, November 15, 2007

European Telecom Market Authority (ETMA aka EECMA) a threat to checks and balances in the EU

The proposed European Telecom Market Authority (aka EECMA or European Electronic Communications Market Authority) is an utter disaster for the democratic functioning of the EU and the balance of powers in the EU. It will:
  • not address the fundamental problem of an imbalance in power,
  • increase the lack of democratic oversight of telecommunications decisions at the EU level,
  • seriously devalue the role of national regulators,
  • ruin the quality of rules and regulations, and finally,
  • answer a demand that was never there.
I will try to sketch the context and the formal situation first and then explain how it works in practice, then why the new European Telecom Market Authority will have the negative effects that I suspect, and lastly what I think needs to be done.

As you may know, the EU proposed changes to the current telecommunications regulatory framework. This framework is the basis for the telecommunication laws in each of the EU member states. The current framework is quite good, certainly compared to e.g. the US telecommunication laws, but a review has been held and some proposals have come from the Commission. Most of it is evolutionary and not revolutionary. There is however one controversial proposal, which is the European Telecom Market Authority.

The proposal
The EC proposes to establish an organization of about 120 people reporting to the European Parliament and tasked with:

  • "ensuring that the 27 national regulators work as an efficient team on the basis of common guiding principles;
  • delivering opinions and assisting in preparing single market measures of the Commission for the telecoms sector;
  • improving the accessibility of telecoms services and equipment for users with disabilities;
  • monitoring closely the use of the single European emergency phone number, 112, and identifying remaining obstacles;
  • facilitating cross-border EU services in relation to rights-of-use for scarce resources such as spectrum and numbers, and enabling operators wishing to do so to use a single European area code for their services;
  • addressing network and information security issues."
The Commission is of the opinion that we need such an organization, since the national regulators (NRAs) in the member states haven't all been swift, consistent and effective. This is undeniably the case, and in some cases nations have just been downright horrible and wrong: Poland, where the deputy minister was head of the independent regulator, and Germany with its regulatory holiday for its poor incumbent Deutsche Telekom, which is building something that uses VDSL2 but is a service instead of a network and therefore, according to the German regulator, not subject to regulation. However, member states getting it wrong doesn't mean the Commission/ETMA should take over, and even less that it would be good at it.

The Current Situation

A longer description can be found here. Currently, when a national telecoms regulator issues a ruling, it has to notify the Commission, which can then raise "Serious Doubts" or even veto the ruling. The Commission has nothing to say on the actual remedies, though it would love to. When serious doubts are raised, the NRA has to consider them and take them into account in its ruling; if the Commission is still not happy, it can veto the measure. If the NRA is unhappy, it can go to the European Court of Justice and wait 4 years for a solution. The European Regulators Group (ERG) can also be asked for an opinion to advise in a dispute between NRA and Commission.

Reality has a tendency to be different from what you read in the law. I've taken to game theory quite a lot lately: laws are the rules of the game, but they don't explain how the game is played in reality. The current game is unbalanced and completely tilted towards the Commission. What happens is the following:
  • The Commission comes asking what your proposed regulations are. It will then informally and verbally tell you what is wrong and that you need to fix it. At that moment, civil servants at NRAs and ministries internally already have a serious problem. Directors want something fixed; ministers are wondering what parliament will think.
  • The Commission will also want to see the proposed remedies, regardless of the fact that it is not allowed to rule on them. If it doesn't like the remedies, it will not like the ruling.
  • If the Commission raises 'Serious Doubts', the press will be informed. There might be an official point of view, but informally it's a spin war. The Commission answers to no one, while the NRA and national government have to fight off the press and parliament. The press figures that the government is wrong from the get-go, and in parliament the opposition is having a field day. At this point most political figures will buckle and give in to the Commission.
  • If the Commission vetoes the government's decision, then the government is in big trouble. The minister has clearly failed; the NRA is incompetent. This will result in debates in parliament, and again politicians will cave in.
  • The internal discussions within the Commission on the subject can reflect national and EU-level political discussions to such an extent that it is hard to distinguish between reason and politics.
  • The ERG can be asked for an opinion, but the Commission is not bound by it and it carries little political weight. The ERG tends to side with the Commission when the Commission is very clearly correct (e.g. Germany's Regulierungsferien) and only sometimes goes against the Commission, when it is clear the Commission is wrong. In the former case the Commission will cite the opinion widely as proof that it is right. In the latter case the Commission will brush off those incompetent NRAs.
  • A government will not have any place to go until a final ruling by the Commission. When there is a ruling, it can stand in line at the European Court of Justice, a group of intelligent but very slow judges. It currently takes 4 years to get a ruling.
So there you have it. An NRA can be right, but being acknowledged as right takes 4 years, at which point the ruling is irrelevant. In the meantime there is a stalemate, or worse, the country issues the ruling according to the decision of the Commission and just continues the lawsuit out of principle. So should you wonder why NRAs and Commission often agree, there you have it.
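Since this post leans on game theory anyway, the dynamic above can be caricatured as a tiny payoff model. All numbers below are my own and purely illustrative; the point is only that caving dominates fighting as long as an ECJ ruling takes four years to arrive.

```python
# Toy payoff sketch of the NRA's choice when the Commission raises
# "Serious Doubts": cave immediately, or fight it out at the ECJ.
# Numbers are invented for illustration.

def nra_payoff(action, years_to_ecj_ruling):
    if action == "cave":
        return -1                      # small loss: conform to the Commission
    # fight: political cost now, and the value of winning decays to zero
    # as the ruling becomes irrelevant over the years of waiting
    political_cost = -3
    value_of_winning = 5 * max(0.0, 1 - years_to_ecj_ruling / 4)
    return political_cost + value_of_winning

print(nra_payoff("cave", 4))    # -1
print(nra_payoff("fight", 4))   # -3  -> with a 4-year wait, rational NRAs cave
print(nra_payoff("fight", 0.5)) # 1.375 -> with quick dispute resolution, fighting pays
```

With a 6-month resolution the calculus flips, which is exactly why time is such an effective weapon for the EC.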

The fundamental problem

The fundamental problem of telecoms regulation in the EU is not incompetence at the national regulator, as the EC claims; it's the imbalance in power, caused partially by the long time it takes a nation to appeal a decision and by the difference in context an NRA operates in compared to the EC. The ETMA will not solve this problem, but only make it worse.

ETMA will not be under the oversight of the Commission, but will report to Parliament. There is however no mechanism to prevent the Commission from meddling heavily in the operations of ETMA. ETMA can now deflect any direct attention away from the Commission, while the Commission can still run it from a distance. This gives the EC more power in a subtle way. It will not improve the position of the nations, because for an appeal they still have to go to the ECJ in Luxembourg. So you've got a perpetrator, a fall guy, and forever to wait for justice.

Decreasing democratic oversight

As said in the previous paragraph, ETMA will deflect attention away from the Commission, giving the Commission more power in a more subtle way. Parliament is a toothless vehicle, because it cannot dictate what ETMA will do, just oversee that it is doing something. The European Parliament just isn't equipped to deal with direct intervention of the Commission in ETMA. Normal ways of indirect democratic oversight of the Commission are also out of the question. These ways are normally bartering and bribing, because the Commission needs the Member States on other issues. The Commission is now impervious to bartering and bribing, because it can deflect any just criticism towards ETMA. Now bartering and bribing are already bad, but this doesn't add oversight, just an extra layer of confusion.

Devalue the role of national regulators
The Commission will use ETMA to lay down dictates on how to regulate. It will require each nation to do exactly the same as elsewhere, regardless of the local situation. Yes, the EC will tell you differently, but the Commission's actions against The Netherlands on the cable sector show otherwise. The Dutch cable sector is different from anywhere else, with 98% of homes passed and 95% of the people subscribed to analogue and/or digital TV. Knowing this, national regulators will know better than to "Think Different". They will auto-conform without ever considering a different option.

Injure the quality of rules and regulations
When conformance becomes the rule, regulators will lose the appetite to properly research their national markets and identify proper actions in line with the national situation. This will make both the Commission and the NRAs complacent. The Commission will argue that it's always right because everybody follows it, through ETMA of course. The NRAs will grow complacent by just copying ETMA.

Lack of demand for ETMA
There is no clear demand for establishing an ETMA with 120 people. These people are going to do things already done at the Commission and at e.g. ENISA at the moment. Chances are that it will be all new people, who will duplicate the Commission's work. 120 people doing nothing will want to do something, and that will lead to more meddling, bright ideas, window dressing, useless reports and infighting.

A solution is not easy, but would need to consist of these elements:
- Quick dispute resolution at the ECJ (6 months). This will make the ECJ relevant and remove time as a weapon from the EC.
- Strengthening of the ERG. If a majority of the ERG agrees with the Commission, then the Commission must be right, and the other way round. It's hard for any party to broker a deal with 14 nations to get a favorable ERG ruling.
- Strengthening ENISA to tackle security problems where they are. The problem is not in the network and the cause is not the telco, so establishing a CTO at ETMA will not help.
- Accessibility and 112 emergency services are already handled at the Commission. They are not purely problems of the IT sector and could be handled through normal channels of the Commission.
- Cross-border problems should be dealt with by a proposal of the Commission and a decision by the Member States, not by ETMA.

In the coming days I will work through the papers more. I hope to have a look at what a provider of electronic communications networks and services is, Net Neutrality and functional separation.

Update November 21st 2007: For a moment I thought I had misjudged the Commission and that ETMA would actually have something real to say. I thought the 27 regulators would have full voting power and that the Commission would be subject to the opinion of the Board of Regulators. But I've read the regulation, and it turns out the Board of Regulators can only issue a non-binding opinion, much like the ERG now. So they are toothless, fluffy paper tigers; no chance that the Commission might hurt itself on the Board of Regulators. Oh well, it's good to know the world is still a sphere and pigs don't fly.

Wednesday, November 14, 2007

Nokia E51 released yesterday? No, on November 22nd or December 6th?!?!?!

Well, what can I say... I'm sad. November 12th came and went and there has been no official release of the E51, neither in The Netherlands nor in the UK. It seems however that you can buy it on eBay and somewhere in Europe. The guy behind has his already and blogs about it. The people of Symbian Review have a review up and love it.

German sites, like Conrad, were reporting that they had it in stock, but now they are reporting that it will be available on the 22nd of November. Another has moved the date up to December 6th. My mr. Fix-it, who's got a good reputation for getting the cool stuff while it's still hot, hasn't gotten his hands on it either. He still has it on backorder. I haven't found a Nokia site yet that says that it is shipping. There are no press releases either. Nokia Europe has moved the phone to its list of available phones and off the list of phones to be released, but Nokia Germany and the Netherlands both have it listed as "available soon". Most Nokia sites do show a nice promo for the phone, but I don't want pretty pictures, I want press releases and shipped models.

Meanwhile it seems that Hungarians can buy the E51 on the black market already... I interpret the Hungarian euphemisms as: A container full of these somehow got lost and ended up in our shop... that's not stealing is it :-)

Monday, November 12, 2007

FTTH, CCTV and a safer society go hand in hand

Brilliant article by The Register: residents of Shoreditch in the UK got access to the footage of CCTV surveillance cameras through digital TV. The pictures were in a grainy resolution so as not to allow people to identify individuals, so privacy was kind of protected. The result: IT'S MORE POPULAR THAN BIG BROTHER! Better still, people wanted more of it and in a higher resolution. The focus group response was: "Focus group feedback indicates the CCTV is helping address fear of crime and... generating major new community vigilance resource."

Now imagine FTTH everywhere and grannies working from home as CCTV camera operators. We'll get those nice, spiffy AXIS cams installed, which work in daylight and at night and can deliver HDTV quality. You could set it up as a scheme where those watching get a bonus for every crime they report, or as a new work-from-home scheme. Whole groups of inactives could be crowdsourced. Actually, you could get two or three people watching the same scene independently. If they don't know who else is watching, it could be a great scheme to keep CCTV camera operators honest: if two are reporting something happening and number three isn't, that person is not doing his job.
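The honesty check sketched above (two operators report, a third stays silent) can be written down in a few lines. This is an illustrative toy; the names and the two-out-of-three threshold are my own.

```python
# Toy majority check for independent CCTV operators watching the same scene:
# if at least two report an incident, any silent operator gets flagged.

def flag_inattentive(reports):
    """reports: dict of operator name -> True if they reported the incident."""
    reporters = [op for op, saw in reports.items() if saw]
    silent = [op for op, saw in reports.items() if not saw]
    if len(reporters) >= 2 and silent:
        return silent            # majority saw it; the silent ones are suspect
    return []                    # no majority, nobody gets flagged

print(flag_inattentive({"granny_a": True, "granny_b": True, "granny_c": False}))
# ['granny_c']
```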

At high-def 5 Mbit/s or more this will require some nice FTTH or VDSL2 type of connections. (Of course there are ways of conserving bandwidth until such a moment as the operator feels it necessary to get a good look.) The traffic is therefore best kept local, but that should be no problem, as locally there should be no lack of bandwidth in a VDSL2 world. (Note to the British: this doesn't include you. BT keeps you at ADSL2+, which just isn't good enough to hook all those cameras up at HDTV quality, and most of you will not be able to watch it in HDTV, since you live too far from the exchange.)
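A back-of-the-envelope check on the bandwidth claim above; the link speeds are my assumptions for typical best-case and worst-case access lines of the day.

```python
# How many 5 Mbit/s HD camera streams fit on various access links?
STREAM_MBPS = 5

def max_streams(link_mbps):
    return link_mbps // STREAM_MBPS

print(max_streams(24))    # ADSL2+ best case, close to the exchange: 4 streams
print(max_streams(8))     # ADSL2+ far from the exchange: 1 stream
print(max_streams(50))    # VDSL2-class line: 10 streams
print(max_streams(100))   # FTTH: 20 streams
```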

Combine it with Google Maps and let people annotate the events. Even better, store all the information readily available and searchable for the public. Let them annotate it and they will make everything even clearer. Pretty soon you'll have a complete record of every car that drove by on a road and a description of every person that ever glanced at the cam. Let them combine their own pictures and you'll have a transparent society! What the German Democratic Republic (commies) never achieved, we can achieve using cameras and grannies!

Now just think of it, the possibilities are endless, and the great thing is people's curiosity/loneliness will bring Big Brother upon us without so much as a complaint. Onwards to the Transparent Society and Big Brother be damned. If you've got nothing to hide, you've got nothing against this idea! (/sarcasm)

Wednesday, November 07, 2007

There is no economic basis for QoS

This was the outline of a paper I planned to write, but for which I just have too little time to get it finished. My main point is that QoS mechanisms in a network are a bad idea (tm). This is generally examined from a technical point of view, and the arguments are generally that we tried it and it didn't work. There is little research that evaluates the economic side. The little research that there is generally argues that QoS mechanisms could work if all parties in a communication chain just worked together, and that the reason they don't is the lack of incentives. I believe there are several reasons why QoS can't work and why it is a failure of logic.


The internet is broken, so we're told by scientists and standardization bodies. We need a new internet, and research at Stanford, Berkeley, the Fraunhofer Institute and various European Union programs will fix it. One of the main points of criticism is the internet's lack of Quality of Service mechanisms to shape and prioritize traffic and to make sure that unimportant traffic doesn't hurt important traffic. All this in order to give the end-user an optimal Quality of Experience. The ITU has made end-to-end Quality of Service a central element of the design of its specifications for a Next Generation Network.

There is a great deal of attention in academic research on telecommunications networks for Quality of Service mechanisms. It is often stated that without such mechanisms telecommunications networks will not be able to deliver a stable and reliable service. Both on the technical side and on the economic side there is a considerable body of literature on how these mechanisms will work out in the network and in the business models that sustain the network. In order to realize QoS we invest large sums of money in research programs to fix the dreaded problem. Given the number of scientific papers and research proposals mentioning the absence of QoS as a major problem for the roll-out of all kinds of advanced and mission-critical services, how could we not? Everybody says it's important, so it must be important. Except for one minor detail: despite over twenty years of research and various standards and implementations of standards, nobody is using it.

The idea of QoS mechanisms as essential to the stability and reliability of the network has been at the heart of the Net Neutrality debate. It also rears its head in debates on how to make a sustainable investment in networks and services. The notion of QoS mechanisms has therefore passed beyond the realm of the purely technical and academic and entered policy debates, where policy makers will have to value the various claims.

This paper will examine in a multidisciplinary way the basis for QoS mechanisms in telecommunications networks from both a network engineering and an economic point of view. Quality of Experience for the end-user is the end goal of any network architecture, and that is where QoS mechanisms are supposed to deliver. We will show that the use of QoS mechanisms to deliver QoE is bound to result in failure right from the start, since QoS through shaping and prioritizing is a logically and conceptually flawed concept. It's a holy grail and a pipe dream. These mechanisms cannot work, and therefore building networks, business models or policy on them will result in failure. There is however a simple solution to QoS problems, and that is to over-engineer the network and all the active equipment (servers, routers etc.).

What is Quality of Service and Quality of Experience?

Just look it up on Wikipedia. Also look up jitter and lag etc.

What kind of QoS mechanisms are there?

These mechanisms in general take three forms:

- Prioritizing systems, that let packets move ahead in the queue based on how high their priority bit is (like sirens and lights on an ambulance).

- Bandwidth Reservation systems, that guarantee a certain amount of bandwidth over (part of) a link between two points. (like the telephone network that reserved a line between two points)

- QoS-enabled routing systems, that try to route traffic based on knowledge of the state of the network (like a driver learning of a traffic jam on the route to work and therefore taking another route).
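For illustration, the first of these three, strict-priority queueing, can be sketched in a few lines of Python using the standard library's heap; the traffic classes are invented for the example.

```python
# Minimal strict-priority scheduler: packets with a lower priority number
# are forwarded first, like an ambulance overtaking the queue.
import heapq

class PriorityScheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0            # tie-breaker keeps FIFO order within a class

    def enqueue(self, priority, packet):
        heapq.heappush(self._queue, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2]

s = PriorityScheduler()
s.enqueue(2, "email")
s.enqueue(0, "voip")
s.enqueue(1, "video")
print([s.dequeue() for _ in range(3)])   # ['voip', 'video', 'email']
```

Note the failure mode discussed later in this post: if every stream marks itself top priority, the scheduler degenerates to plain FIFO and the mechanism adds nothing.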

These systems have seen various implementations and all have failed. There are a lot of explanations in the literature for why QoS is not a success. They can be divided into three different classes:

- it's a failure of the previous technology, but we will think up a new one that will get it right

- it's a failure of economics: bandwidth is too cheap, implementation is too hard (but this will all change, just you wait and see)

- it's a failure of timing: currently we don't need it, because nobody uses the internet for business-critical stuff, but the status quo has to change or we cannot do telesurgery etc.

There is very little literature available on whether QoS is actually necessary and whether QoS is actually possible: whether all these mechanisms have any chance of working at all.

Many engineers that designed the protocols and networks that have built the internet explicitly or implicitly accept that QoS mechanisms will not work in actual networks. Most sensible engineers don't even want to get into a debate about it anymore.

Actual implementations of Quality of Service Mechanisms

There are currently several QoS mechanisms standardized for use with the Internet protocol.

Diffserv, Intserv, RSVP, MPLS


The conceptual errors that underlie the failure in implementation of QoS mechanisms can be either technical or economic. Technical errors are those that make it impossible to design a technical system that meets all the demands of a QoS mechanism for it to be technically functional, stable and reliable. Economic errors are those that make it impossible to properly implement and operate a network with QoS mechanisms. The economic errors and technical errors feed into each other, strengthening each other's effects.

Technical errors:

  • Scarcity in the network is a layer 1 problem, while QoS mechanisms operate in layers 2 to 7. We're trying to stuff more bits into a pipe than can properly fit, like trying to put marbles through a funnel. Put differently, we're trying to fix a layer 1 problem in layer 2 or 3, by making assumptions about layer 7 traffic and about the real world making use of it.

  • QoS routing is NP-hard.

  • QoS tries to make the pipe more efficient to allow for more traffic. This only works when the pipe is almost full, but not when it's completely full. In a dynamic system the difference between empty, almost full and completely full is a couple of percentage points. This leaves very little room to manoeuvre.

  • QoS prioritization works in the switches between the two users. On a modern system the time advantage that can be achieved by prioritizing a packet through a switch is x millionths of a second. This is less than x% of the one-way time of a route of 100 km. We're trying to solve the line's problem in the switch.

  • QoS only works if the switches can derive an order in which to treat applications. If all streams have top priority, there is no way to determine which ones should get priority over the others.

  • Bandwidth reservation mechanisms are binary. Either there is capacity that can be reserved, or there is not, regardless of whether capacity is actually available.

  • It may be a straw that breaks the camel's back, but there is a lot more weight wearing it down. Removing the straw or one other object might save the back, but the camel remains heavily burdened. The same goes for networks: both big and small flows can break a network. It's the total that counts.
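To put rough numbers on the switch bullet above: the figures below are my own assumptions (a generous 10 µs saved per switch, light travelling at roughly two-thirds of c in fibre), but the orders of magnitude are what matter.

```python
# Compare the time a priority bit can save in one switch with the time
# light needs simply to cross 100 km of fibre.
SPEED_IN_FIBRE_KM_S = 200_000            # ~2/3 of the speed of light in vacuum

def propagation_ms(distance_km):
    return distance_km / SPEED_IN_FIBRE_KM_S * 1000

switch_gain_ms = 0.01                    # assume a generous 10 microseconds saved
prop = propagation_ms(100)
print(round(prop, 2))                            # one-way propagation: 0.5 ms
print(round(switch_gain_ms / prop * 100, 1))     # the switch gain is ~2% of that
```

Even stacking ten prioritizing switches on the path saves less time than the unavoidable propagation delay, which is why fixing the line's problem in the switch is hopeless.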

Economic errors:

  • A QoS system will have to weigh the demands of all users in order to determine the highest utility for all. This requires insight into the utility function of each user and an overarching utility function to weigh the utilities of all users against each other.

  • There is an implicit assumption that both sender and receiver will value a stream equally high. In any communication, there are senders and receivers. Both have a value for that communication and a value for other communications flows on the same connection. When watching a movie online, the company that broadcasts the movie values the QoE of its customer very highly. It doesn't want the customer to receive a jagged and jittery movie. The customer however is not only watching a movie, but might also be expecting an important phone call or communicating otherwise.

  • QoS works in a static setting (see the technical errors). However, the market is dynamic if it's healthy, and this will reflect itself in the network as one of the main platforms over which market forces exert themselves. One cannot assume a static situation for a QoS mechanism if the data flows follow market dynamics and grow with growth in population and prosperity (and when bandwidth usage decreases, there is no need for QoS mechanisms).

  • (Variation on the above) If the market is stable (no growth or decline), there is no reason to ration traffic. If traffic is declining, the use of QoS mechanisms becomes unnecessary after a while. If it is growing, then after a foreseeable period there is too much traffic for QoS mechanisms to add QoE.
  • In many business cases surrounding QoS mechanisms there is an assumption that QoS-enabled traffic that has been paid for has a higher value to the user than data that has not been paid for. This sounds logical from an economic point of view if money is an adequate proxy. However, it isn't. Compare a VoIP call that clashes with a pay-per-view movie: if the VoIP call is about an important subject (the birth of a child), then it has priority for the receiver, regardless of the QoS level paid for.

Overengineer the network, so you don't get into a situation where QoS mechanisms are appropriate.
On end-user connections, let end-users prioritize the traffic to and from them. They are the only ones with an accurate view of their own utility function.
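A minimal sketch of what end-user prioritization could look like, assuming a simple weighted-sharing scheme; the app names and weights are invented, the point is that the user, not the network, sets them.

```python
# Divide an access link among applications by user-chosen weights.
# The weights express the user's own utility function, which no
# network-side QoS mechanism can know.

def allocate(link_mbps, weights):
    """weights: dict of app -> user-chosen importance."""
    total = sum(weights.values())
    return {app: round(link_mbps * w / total, 1) for app, w in weights.items()}

# This user decides the expected VoIP call outranks the movie:
print(allocate(20, {"voip": 6, "movie": 3, "downloads": 1}))
# {'voip': 12.0, 'movie': 6.0, 'downloads': 2.0}
```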

Wednesday, October 31, 2007

Fight VAT-Crime: outsource your billing to the tax man!

The Dutch financial newspaper "Financieel Dagblad" reports on a scheme to fight VAT fraud. The idea is quite simple: let's use electronic billing in the EU and send a copy of every bill sent in the EU to the tax service! This idea by Mr. Wilbert Nieuwenhuizen sounds quite simple, and with the total value of VAT fraud exceeding €100 billion it will probably be worth every penny the tax services invest in it. Electronic billing in itself will save on average €25 per bill, and a copy to the tax man is easily made, so why should companies object?

Now as a telecoms and ICT guy, I like technical solutions to hard problems, but this one scares me. This could mean that the taxman pretty soon knows of any and all transactions in the country. On top of knowing whether the VAT paid and declared all matches up, they would also know of any and all trades done in the country. Now I know that they can already get access to any and all trades that are on the books, by making a friendly phone call to a CFO, but this goes much further. What little privacy we have would be lost too. I'd say: instant Big Brother Award nomination!

So I propose the following. If the taxman knows all my transactions, then it should also do my books: give me a monthly overview of how I am doing and, if I run a business, how my business is doing. They know more than the accountant by now, so they can sign off on the books too. They can give me instant deductions on taxes when I qualify for them and an automatic extension of my payment term if my cash flow is not adequate. Then they are really useful and I'm willing to relinquish my privacy.

Sunday, October 28, 2007

Updated: Release date Nokia E51 is November 12th

I just ordered Nokia's new and unreleased E51, because it gave me a whole lotta goodies for a 300 euro price tag (through a friend). Unfortunately it was unknown when it would be released, except for Q4 2007, which could be the day after Christmas. But fortunately Amazon UK is now reporting that it will be available November 12th! (The same day as the iPhone is released in Europe; correction: UK only I hear, France on the 29th.) Great! I really want to fiddle with the VoIP functionality and WiFi capability. Also, it supports almost all e-mail systems right from the get-go, except it seems for Gmail, which is a pity. The iPhone would have been cute too... but Apple's policy makes it hard to get one without a subscription.

Update: It's available in Germany! Seems it's been available there for two weeks already. My Google powers must have been very weak not to be able to find it. The strange thing is that another site says that it is "bald erhältlich", or in English: available soon. Great thing, this globalisation/e-commerce/weightless economy. Stuff is on the market even before Nokia gets word of it.

Friday, October 26, 2007

The day the routers died

Well, Slashdot doesn't seem to pick it up, but this is just too funny and serious to leave unmentioned. So this is what I sent them. I should go to RIPE again.

"The RIPE 55 meeting has just concluded. There was much debate on what to do about the imminent depletion of the unallocated IPv4 pool in 2010. We could do nothing, or we could create a market place and facilitate the transfer of IP addresses, but it's all a train wreck waiting to happen. This is best shown by a beautiful song, "The Day the Routers Died", also available on YouTube, written and performed by Gary Feldman. So please all upgrade to IPv6 soon, or else you will not get 40Gbit/s to your mother."

Wednesday, October 24, 2007

Update: Using "evil" dataretention for emergency (e911 and 112) services good

At RIPE 55 a presentation was given by Alexander Mayrhofer on internet-based emergency calls. One of the main problems they need to deal with is getting location data for IP addresses. In the traditional telephone world we faked knowing where the caller of 112/911 was, by equating it with the address in the telephone book. For years this was kind of sufficient, except when calling from an outside branch office; then you would notice the firemen arriving at the head office :-) Mobile made the world more difficult, but that is solvable by an antenna register and equipping phones with GPS. VoIP in its nomadic form is a different beast altogether. The presentation is quite clear on all the complications.

This problem of tying IP addresses to locations is also faced by law enforcement when hunting down terrorists, child pornographers, serious crime and cyber criminals (the four horsemen of the apocalypse). To aid law enforcement in this quest, the EU has written a data retention directive that requires telecommunications networks to retain who was given what IP address and at what location. The exact specifics vary between countries and interpretations of the directive. So let's bring these two together: give emergency services access to the up-to-date data retention databases and presto, one of the problems is (partially) solved. For ISPs it saves building two systems. For law enforcement it saves accessing two systems, with the extra bonus that an ISP will be more willing to improve the quality of the database.
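As an illustration of the proposed reuse, a 112 call centre could query the same who-had-which-IP-when records an ISP already keeps under the directive. The schema and values below are entirely hypothetical.

```python
# Toy lookup against a retention-style database: which subscriber address
# held a given IP address at the moment of the emergency call?
from datetime import datetime

# (ip, valid_from, valid_until, subscriber_address) -- invented records
RETENTION_DB = [
    ("192.0.2.10", datetime(2007, 11, 1), datetime(2007, 11, 30),
     "Example Street 1, Amsterdam"),
]

def locate_caller(ip, call_time):
    for rec_ip, start, end, address in RETENTION_DB:
        if rec_ip == ip and start <= call_time <= end:
            return address
    return None   # nomadic VoIP: no match, the dispatcher must ask the caller

print(locate_caller("192.0.2.10", datetime(2007, 11, 15)))
# Example Street 1, Amsterdam
```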

Update October 29th, 2007: Continuous Improvement picked up on this article. He thinks that getting the information right for e911 will help the data retention people, and not the other way round. I think it is mostly the other way round: getting data retention right will help on those occasions when someone calls an e911/112 emergency number. With data retention the government wants to know when and where you used your mobile/VoIP, and they will need to get this right anyway. A special subsection of that is 112, so getting access to the special subsection sounds more logical IMHO than the other way round.

Friday, September 21, 2007

A simple proposal for mobile roaming charges.

Update: this would work great if combined with an iPhone/Android app that would do the selection of networks for you. Either the app maker or the European Commission could have a matrix of all costs charged by host mobile networks for mobiles roaming on their networks.

The EU has finally succeeded in capping the roaming charges that customers have to pay when they use their mobile in another country. Ms. Reding’s proposal was quite simple: a cap on wholesale per minute charges and a cap on retail per minute charges. The caps for the coming years are:
Eurotariff maximum           | Summer 2007 | Summer 2008 | Summer 2009
-----------------------------|-------------|-------------|------------
Mobile calls made abroad     | 49 cents    | 46 cents    | 43 cents
Mobile calls received abroad | 24 cents    | 22 cents    | 19 cents
This is nice and an improvement on the current situation. However, it will not lead to a competitive European market for roaming. For the market to become competitive, two things would need to be true:
  1. The customer's choice of roaming network has a direct impact on the price the customer pays.
  2. When a destination network lowers its roaming charges, the end-customer immediately feels it.
In reality neither is the case:
Ad 1. When a customer uses a network abroad, he or she is often charged one of two tariffs: a low tariff for the preferential network and a high tariff for the 2-4 other mobile networks in the country. The preferential network is often a sister company of the user's home mobile phone company. If a customer decides to use a different network than the preferential one, this will only result in higher charges, since the retail price has been fixed by the home network. The customer's choice is constrained by the home operator.
Ad 2. When a destination network decides to lower its roaming charges, out of the kindness of its heart, this doesn't oblige the home network to charge the customer a different retail price. As the table shows, there is nothing wrong with charging 46 cents for making a call even if the destination network only charges 10 cents. Not only does the destination network receive less, it also hasn't become more attractive to the consumer, so there is no way it can make up for its lost margin by increasing volume.

A simple proposal to end this mess (so it will probably never happen)
Roaming in another country is technically and economically quite simple. There is a person who is not a customer of the destination network, but of another network, who wants to use the destination network to place a call. For the destination network, setting up the call is the same as for its own customers, except for one thing: authentication. It has to authenticate the roaming caller on its network and check whether there is a contractual relationship with the caller's home network. If there is such a contract, the roaming caller's home network can be billed and therefore the caller can place (or receive) a call.
The ONLY thing a home network does for a destination network when one of its customers roams is the authentication and the billing. It doesn't carry traffic. It is not involved in routing the traffic. It acts just like a credit card company: it authenticates that the customer can make the purchase (place or receive a call) and it bills the customer the price (in the process charging the shop a modest fee of a few percent). This is beneficial for the customer, who can buy anywhere (place a call on any network), and for the shop owner (the destination network), who knows it will get paid and doesn't run the risk of the customer not paying.

When applied to roaming, a better model would be to allow the home network to charge the destination network only a small fee for authentication and billing (2-5%). This way its billing and authentication costs are covered and it is adequately compensated for its trouble. (Visa has made quite a living on 2-5% margins.) The destination network would then be free to charge the consumer whatever it deems fit, whether for a received call, an outgoing local call, an international call, etc. When it wants to lower its prices to attract more customers, it can do so. The price-conscious consumer will be able to switch.
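In numbers, the model is a one-line markup: the customer pays what the visited network charges, plus the home network's authentication/billing fee. A minimal sketch; the tariffs and fee percentages are illustrative, not from any real operator:

```python
def retail_price(destination_charge, fee=0.05):
    """Retail price when the home network may only add a small
    authentication/billing fee on top of the visited network's charge."""
    return round(destination_charge * (1 + fee), 4)

# A destination network charging 10 cents/min reaches the customer at
# 10.5 cents -- instead of a regulator-capped 46 cents:
print(retail_price(0.10))          # 0.105
print(retail_price(0.10, 0.02))    # 0.102
```

Under a fixed retail cap this arithmetic is impossible: the 46-cent retail price stays 46 cents no matter how cheap the visited network becomes.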
Type of call       | Network A      | Network B
-------------------|----------------|----------------
Local call         | 25 cents       | 15 cents
International call | 40 cents       | 25 cents
Receiving call     | 20 cents       | 22 cents
SMS                | 10 cents       | 17 cents
Internet data      | 1 euro/MByte   | 20 cents/MByte
The effect for the consumer would be that, when arriving abroad, he or she will be greeted by billboards and SMS messages from the large networks explaining their tariffs for placing and receiving calls, sending and receiving SMS, and even using mobile internet. There will be competition, because the network that charges the least is most likely to get the most customers.
A smart consumer will even change networks to get the lowest price for each action, be it an outgoing local call, an incoming call, an international call, a call home, etc. In the table above, a smart customer would use Network B for local and international calls, but Network A for SMS and receiving calls. (A very smart consumer would even compare the prices abroad to the prices of calling nationally. Think of it: if a French network charged only 3 cents per minute for calls to a German fixed line, a German might be better off using the French network to call a German fixed line while in Germany, if a German network charges 8 cents a minute for the same call.)
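Picking the cheapest network per action is just a minimum over the tariff table. A sketch using illustrative per-minute prices based on the Network A/B table in this post (the row labels are my reading of it):

```python
# Tariffs per call type, in cents -- illustrative figures.
tariffs = {
    "local":         {"A": 25, "B": 15},
    "international": {"A": 40, "B": 25},
    "receiving":     {"A": 20, "B": 22},
    "sms":           {"A": 10, "B": 17},
}

def best_network(call_type):
    """Return the network with the lowest tariff for this call type."""
    prices = tariffs[call_type]
    return min(prices, key=prices.get)

for call_type in tariffs:
    print(call_type, "->", best_network(call_type))
```

This is exactly the logic the app from the roaming-update idea above would run on the consumer's behalf.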
This will promote genuine competition between networks. All of them will want a share of the visiting customers' revenue. If there is a network C, it could decide to offer the lowest price for all types of calls and data. If MVNOs were allowed to offer roaming to visiting tourists and business people as well, prices would certainly drop. For MVNOs it is the simplest way of running a network: no contracts with individual end-users, no handset subsidies, no risk of non-payment or contract disputes; just a contract with the home network is needed.
From a regulatory point of view this idea would be great. In a competitive market it wouldn't require any regulation at all. The consumer is free to choose from multiple providers that can all set their prices independently. There is no need for a cap. It is only necessary to be vigilant about market power and cartels, which is all in a day's work, and to watch out for home networks charging higher percentages, or destination networks posting wrong prices and/or weird schemes. It gets better: by requiring the home network to charge only what the destination network is charging it, it becomes possible for citizens of the EU to enjoy low tariffs globally. If a Turkish operator charges $1 per minute, that plus 5% will be the retail price for the end-user. If users can choose between multiple operators in Turkey, they can pick the one that charges the least.
Will this scheme be implemented anytime soon? No, of course not! It would promote competition and declining revenues and profit margins. One can expect the whole industry to be heavily against the idea. But it's great fun thinking up such an idea.

Update October 29, 2007: Documents obtained by The Times show that the UK government is heavily opposed to getting consumers a good deal on mobile roaming. The UK government was giving the operators minute-by-minute updates on the state of the negotiations in the EU Council, including remarks such as "UK not happy bunnies" when the proposal was geared too much towards consumers. All in all this shows that an idea like the one above will never see the light of day.

Monday, August 20, 2007

How to compete with BT in the Openreach model?

Comments on my previous post and the whole debate about the BBC iPlayer have got me thinking. According to Ian Wild of Plusnet (see his comments), the amount of money a Wholesale Broadband Access (IPStream) provider needs to pay for backhaul is £180-£200 per Mbps per month. The use of PPPoA also means you can't keep local traffic local and off the backhaul. This can get very expensive very fast, since traffic per customer grows at least 50% per year. To get an idea: if you need to budget 100 kbps of peak-time traffic per customer at £20 today, next year it will be £30 and the year after £45, unless the regulator regularly pushes the prices down. So this is a no-win situation for the ISPs.
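The compounding is easy to sketch. A minimal calculation, assuming the per-Mbps price stays flat (the £200 figure is from the post; 50% growth is the stated minimum):

```python
def backhaul_cost(per_customer_kbps, price_per_mbps, growth=0.5, years=3):
    """Project the per-customer monthly backhaul cost when peak traffic
    grows `growth` per year and the per-Mbps price does not move."""
    costs = []
    kbps = per_customer_kbps
    for _ in range(years):
        costs.append(round(kbps / 1000 * price_per_mbps, 2))
        kbps *= 1 + growth
    return costs

# 100 kbps at peak per customer, at £200 per Mbps per month:
print(backhaul_cost(100, 200))   # [20.0, 30.0, 45.0]
```

The regulator pushing the per-Mbps price down is the only lever that breaks this curve, which is exactly the caveat in the paragraph above.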

What I am wondering now: under what conditions is BT charged in areas where there is no ULL available? Is Openreach charging BT the fee it would charge the Wholesale Broadband Access providers? If so, why doesn't BT complain about the iPlayer seriously hurting margins? Or is BT effectively a ULL provider everywhere in the UK, so that it doesn't feel the pain on its backbone and doesn't need to pay the high backhaul charges? Or, better yet, is part of BT's backhaul paid for by the rising backhaul charges of the WBA providers?

Maybe somebody can explain things to me. There might very well be no conspiracy here. :-) And though it might explain the position of the likes of Tiscali, it still doesn't put them in the right. (It might put Ofcom on the spot!)

Wednesday, August 15, 2007

BBC's iPlayer as the posterchild for net neutrality

It's interesting to see how in the UK some of the lesser ISPs (Tiscali and their lot) have complained in the press about the public broadcasting behemoth BBC and its iPlayer. iPlayer is the BBC's attempt to copy the success of the Dutch public broadcasters' "Uitzending Gemist" (= missed my programme!). It does this using a peer-to-peer program, Kontiki. The ISPs claim that this comes at their expense. IPdev-Blog and Telebusilis have analyzed this in some detail.

Jeremy Penston of IPdev has analyzed very well why ISPs won't invest in new networks and network expansions themselves. The process, in short, is one of mutually assured destruction. If two companies build the same network, they create an oversupply of network connections and bandwidth in the market. They will end up in a price war from which neither can bail out, and both will eventually go bankrupt. (Even if one of them wins the first round, the losing network can be revived from bankruptcy at marginal cost and start the second price war.) The solution seems to be a regional or national public infrastructure. I agree with his ideas and hope to publish a paper along those lines soon.

However, both Telebusilis and IPdev argue that the content creators should finance, in one way or another, the build-out of extra capacity in the network. They argue it is not fair for the BBC to come up with a new service that taxes the networks of ISPs in Britain (up to 67 pence per hour of viewing). I couldn't disagree more. I think it is only the ISP and its customers that should pay. It is the end-user that creates the costs, and that is where the costs should lie.

We live in great times: on a daily basis, people all around the net invent new high-bandwidth services to use over the internet. I'm watching my three-day-old cousin in a hospital on a high-def webcam. You can watch live concerts at Fabchannel. People dress up in Second Life. In Twente, security companies watch their customers' premises using dedicated light paths. Every TV channel and production company is looking into the on-demand opportunity. These new ideas have ever higher bandwidth demands.

To minimize the costs for content producers there are several strategies. Bill Norton of Equinix has made a very good analysis of the costs of video distribution over the internet. His analysis shows that using a peer-to-peer model (like the iPlayer) is the most cost-effective option for the content provider. Or as Cringely paraphrases it:

Norton's analysis, which appears to me to be well thought-out, concludes that P2P is vastly cheaper than any of the other approaches. He concludes that distributing a 1.5 gigabyte movie over the Internet in high volume will cost $0.20 using the current transit model (a single huge distribution server), cost $0.24 using an edge-caching CDN like Akamai, cost $0.17 with a homemade CDN like I used last season to distribute NerdTV, or cost $0.0018 to distribute using P2P. That makes P2P 35 times cheaper than any of the alternate approaches. And (...) Norton further makes the point that none of these distribution models does anything to soften the blow on the ISP. CDNs in particular cost more -- that more being revenue to the CDN -- yet do nothing for the ISP.

Well, the BBC could also do the calculations, and it came up with the advised solution. Which might actually be a better solution for ISPs as well, mind you. It is not often mentioned, but a well-designed P2P protocol keeps local traffic local. So if your neighbour wants to watch a movie that you happen to have on your PC, in an ideal world he would not need to burden the backhaul links from your town to the main switch office; everything stays local. This relieves the ISP's network of heavy backhaul traffic. Just imagine an entire town streaming from the BBC's servers: at 1 megabit, a town with 10,000 parallel streams would be hitting 10 Gbit/s on the backhaul. This way the ISP can save on its backhaul and also on its interconnects with e.g. the BBC. (How perfect the world of P2P protocols is can be seen at IPdev here and here.)
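The 10 Gbit/s figure is just streams times bitrate; the interesting parameter is how much of that a locality-aware P2P client can keep off the backhaul. A sketch — the 80% locality share below is my assumption, not a measured number:

```python
def backhaul_gbps(streams, mbps_per_stream, local_fraction=0.0):
    """Backhaul load in Gbit/s for a town full of parallel streams.
    `local_fraction` is the share a P2P protocol serves inside the town."""
    return round(streams * mbps_per_stream * (1 - local_fraction) / 1000, 3)

# 10,000 parallel 1 Mbit/s streams, all fetched from the BBC's servers:
print(backhaul_gbps(10_000, 1.0))        # 10.0
# The same town, with a locality-aware P2P client serving 80% locally:
print(backhaul_gbps(10_000, 1.0, 0.8))   # 2.0
```

Kontiki-style clients only approximate this ideal, since nothing forces peers to prefer nearby sources, which is why the IPdev posts linked above matter.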

So why shouldn't the bandwidth hogs pay for their bandwidth? The BBC has enough money and it does pay for satellite capacity, so why should it get away for free? Well, the BBC isn't the only one designing high-bandwidth services; as said, it's everybody. All those new services mentioned contribute to the networks creaking under heavy loads. Remote security cams, baby cams, people with no First Life: all of them strain the network. Even normal web surfing contributes. The question of who pays then quickly becomes a question of who we can most easily extort money from. Well, auntie Beeb is old and wealthy, so it might be easy to beat her up for her pension. It's much harder for a UK ISP to do the same to a Dutch hospital, or a security company, or a Japanese public broadcaster, though they might contribute as much to the demise of individual links as the BBC does. (Think of it as cars on the road: all the cars contribute to congestion, foreign and domestic, business or pleasure.) So what you get is that the costs are disproportionally allocated to those companies that are most easily taxed.

Another argument against taxing content providers is that the revenue stream will be so attractive for improving the competitiveness of the ISP that there is no reason to assume the money will go into network upgrades. It might just as well go into more advertising or lower prices. Even better, there is no reason to expect the taxation to cease once the network has increased its capacity. Like so many taxes, it would tend to linger long after it has done its job. For the economist, it's kind of like a terminating monopoly and would require equal amounts of regulation.

A third reason is that imposing a "Save the ISP" tax is detrimental to innovation. Think of it: would you want to father the new Skype if the bandwidth tax bill ends up on your doorstep? Of course not. That would be ridiculous.

By now people will be confused. It must be expensive to get a new network that can handle this amount of traffic, they think. But again they are wrong. You can get a nationwide fibre-to-the-home network for roughly 35 euro per house per month (or an investment of between 1,000 and 2,000 euro per home). For most countries that is significantly less than their investment in roads, and it is equal to what it would cost now to build an electricity network from scratch. Yes, there are upfront costs, but it would last 50 years, allow for all kinds of innovations, etc. If the market doesn't provide this, you have a market imperfection that might require limited government intervention in the civil-engineering part of the physical network, if the benefits outweigh the costs, e.g. Stokab in Sweden. But there are billionaires around willing to cherry-pick FTTH networks (Dik Wessels with Reggefiber). And there are even smart incumbents upgrading their networks to VDSL2 (KPN, Deutsche Telekom) or FTTH (Verizon), and new entrants (Free). Though we are still a bit away from universal 1 gigabit home connections for 35 euro a month.
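For a feel of how a per-home build cost translates into a monthly fee, here is a standard annuity calculation. The 1,500 euro capex, 5% cost of money and 30-year write-off are my own round-number assumptions, not figures from the post; the remainder of the ~35 euro would cover operations, electronics and margin:

```python
def monthly_payment(capex, annual_rate, years):
    """Standard annuity: the flat monthly payment that pays off a
    per-home build cost over its lifetime at a given interest rate."""
    r = annual_rate / 12            # monthly interest rate
    n = years * 12                  # number of monthly payments
    return capex * r / (1 - (1 + r) ** -n)

# 1,500 euro per home, 5% money, written off over 30 years:
print(round(monthly_payment(1500, 0.05, 30), 2))   # roughly 8 euro/month
```

Stretching the write-off towards the 50-year physical lifetime of the fibre pushes the capital component down further, which is the core of the "it's cheaper than roads" argument.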

35 euro per month buys you the fibre network (less if we fund it partially with government money). Interestingly, it doesn't matter whether you use it at 1 Mbit/s, 100 Mbit/s or even a gigabit: it all costs exactly the same. Different speeds for your ADSL line, e.g. 8 Mbit/s or 1 Mbit/s, are only a form of price differentiation; sending more bits over the network is not inherently more expensive. It doesn't get you the traffic yet, though. International and interregional traffic costs money. The way this is dealt with in many countries is with monthly traffic caps, e.g. 40 gigabytes, where if you use more you pay more, or with an acceptable use policy. The way this could work in the future is that you have a gigabit line to your house and a terabyte per month of interregional/international traffic (local traffic is free). If you go over, you pay more.
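Such a tariff is trivial to express: a flat fee for the line plus a cap, with overage billed per gigabyte. A sketch with made-up numbers; none of these are any real ISP's prices:

```python
def monthly_bill(gb_used, base=35.0, cap_gb=1000, eur_per_extra_gb=0.05):
    """Flat fee covers the line plus a traffic cap (here 1 TB of
    non-local traffic); anything over the cap is billed per GB."""
    extra = max(0, gb_used - cap_gb)
    return base + extra * eur_per_extra_gb

print(monthly_bill(400))     # 35.0 -- comfortably under the 1 TB cap
print(monthly_bill(1200))    # 45.0 -- 200 GB over, at 5 cents/GB
```

The key design point is that the price depends only on the amount of bits, never on the type of application generating them.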

Now we arrive at the problem with high-bandwidth applications like P2P applications or babycams. The way Joost and the BBC's iPlayer work, they exchange traffic even when users aren't actively using them. Users actually have no way of knowing or limiting the amount of traffic an application uses. With a babycam you could calculate it, but it's not intuitive. This should be fixed. A user should know what costs they are incurring by using innovative applications. They can then limit their usage according to their needs. It will also push ISPs to increase the monthly traffic cap to offer their customers more than the competitor does. ISPs can then extract the money from their customers based on the amount of bits, and not on the type of application or on which granny to beat up. If customers want to use more, they pay the ISP and they get the bits, regardless of what they use them for.
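The babycam calculation alluded to above is simple in principle, which is exactly why applications should just show it to the user. A sketch; the 500 kbps bitrate and the viewing hours are illustrative assumptions:

```python
def monthly_traffic_gb(bitrate_kbps, hours_per_day, days=30):
    """Traffic an always-on stream (babycam, P2P seed) generates in a month."""
    bits = bitrate_kbps * 1000 * hours_per_day * 3600 * days
    return bits / 8 / 1e9   # bits -> gigabytes

# A 500 kbps babycam watched 2 hours a day:
print(monthly_traffic_gb(500, 2))   # 13.5 GB per month
```

Against e.g. a 40 GB cap that is a third of the budget from one application; against a terabyte cap it is noise. Either way, showing the number lets users decide.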

Alright, this seems too easy. Networks get paid for by the customer, and it seems like content providers are getting a free ride on the network innovation train. The content providers have all this income from advertising and they should share... shouldn't they? There are several arguments against this. First of all, it's highly questionable whether there really is so much money in advertising. The total turnover of the Dutch advertising industry is 6 billion, and this supports ten TV channels, around 10 national newspapers, a couple of hundred magazines, thousands of websites, etc. Some of it doesn't even support content, like billboards and classifieds systems such as Monsterboard. (In comparison, the mobile telecoms sector also makes 6 billion a year, with 4 networks.) Secondly, efficiency in distribution leaves room for innovation elsewhere. Just like containers revolutionized shipping and cemented China's position as factory of the world, so too will new networks and P2P decrease transaction costs and revolutionize the delivery of content. This will lead to globalisation of the content market, and the infrastructure will enable all kinds of innovations, from babycams to immersive content. If there are excess profits to be made in the content market through advertising and pay-per-view models, there will be new entrants, and the breadth and hopefully the quality of the content will go up. This will redistribute the wealth in the market to such an extent that the big advantage of content owners over ISPs that some see will disappear. Efficient markets hate long-term excessive profits for an entire industry, though one company may prosper because of enormous economies of scale and network effects.

Therefore the conclusion is:
New applications will demand more and more bandwidth, and their combined usage will compound the problem. This will push ISPs to deliver more bandwidth and traffic. Users will be paying for this one way or another. If the market doesn't provide the bandwidth, government should. ISPs taxing those who design high-bandwidth applications is not a solution; it would be a disaster. We need innovation in content as well as in applications and services. In order to relieve backhaul, local traffic should stay local, and local interconnection should be possible between ISPs and private networks; see NDIX for a great example (yes, I once worked there).

Saturday, August 04, 2007

Wishlist for Google Apps Enterprise

This is a wishlist of stuff I would like to have in my company to make my life easier. It's all about how we deal with information in organisations. There is so much information in companies. Most of it is tacit knowledge. This kind of knowledge is locked away in people's minds, mailboxes, bookmark lists, RSS readers, implicit references in memos, discussions, interactions. In the end it comes down to Google's mission: to make the world's information accessible. Microsoft gave us the office tools to make information, but failed us badly in making it accessible. I've written this with an eye to Google, because they seem best positioned to deliver some of these advances, but hey, anybody can try and realize this dream, be they Microsoft, Zimbra or Open Office.

My Google Wishlist:

- Google Reader with Google Apps for enterprises. This way it should be possible to see what feeds your coworkers subscribe to, what is hot on those lists, share the most important articles with your coworkers, etc. And for good measure it should include a company Digg/delicious function.
- desktop and company wide search
- Google Reader Enterprise version with sharing, searching, mining, statistics on what is most read, shared, dugg etc.
- an in-company social network to replace those tired phonebooks with MySpace/Orkut-like pages. This can also provide clues about the projects we're in, and therefore a web of relevance
- Google proxy sniffer (might be a privacy/security concern) that analyzes via the proxy what webpages are read most and therefore are important for our company.
- Google Wiki - well, they own JotSpot already; give it back to us and let every company grow its own wiki, or else we'll use Socialtext, Confluence and CentralDesktop
- Google Grandcentral to be finally able to manage our internal telephone system including IM and let that be well integrated into our Calendar function, so that when somebody calls us, the system knows what to do and reach us properly
- Google Blackberry functions. For the love of me, I don't understand why the Crackberry can only function in such a limited way for in-company use... Make it useful: let me access all my company information on it, not just my mail but also my intranet
- Google IM... Buy and build the best incompany IM system, that can interact with other incompany IM systems just like e-mail systems can interact, without the need for a third party to be in the middle
- GMail/Calendar etc. Enterprise, without the need to host it at Google, but with the ability to run it in-company or at a third party. The apps are cool, but big companies never want to give everything to Google. They just want to give it to Suresh of Accenture in Bangalore.
- Google Spreadsheets that can actually integrate the data of the spreadsheet with the real world out there. So if I make a spreadsheet showing sales per region, I can push one button and get a map overview projected on Google Earth, integrate it with stats from the national bureau of statistics, or hook it up with data from Google Finance. Or that can actually animate the information in the spreadsheet, just like the Gapminder software they bought from Prof. Hans Rosling. (Google him, he's brilliant)
- Google Document Management System, that actually allows us to manage documents the way we want, instead of the way the idiots of Hummingbird and Documentum want us to do things. I don't want to fill in a gazillion fields to store one document. I want to make it, store it, retrieve it and share it, without everything becoming too hard.
- Google company blogs
- Google subscription manager. Companies have many subscriptions to magazines and newspapers which give access to archives. However, employees never have the list of usernames and passwords. Help us manage this.

Sounds like a rather nice business plan for the Google Apps division.

Tuesday, July 31, 2007

Why the economics of Second Life fail

Chris Anderson, editor of Wired and author of The Long Tail, wrote critically about Second Life and wondered if he was being too critical. My reaction can be seen below. I have visited Second Life, and I like the creativity on display in it. I just don't think it can compete well with a combination of Real Life and other internet applications (WWW, IM, wiki, etc.). The reason, as outlined below, is that it lacks the proper economic basis of actually improving upon existing technologies and applications.

Chris, I don't think you're being too harsh. James Au, in his 'seductive long tail argument', sums it up quite nicely: a comparison should be done on the basis of "length of engagement in SL, versus other ad mediums; quality of engagement, in terms of brand immersion and recognition; quality of potential participant, considering Resident demographics as content creators, bloggers, early adopters, etc." and "All that to one side, it is still nevertheless true that SL developers have yet to create an unambiguously compelling and unique example of real world advertising that is massive or effective enough to convince honest skeptics. (As I believe Chris and Frank ultimately to be.)" James believes that your opinions should be debunked and that a compelling case would prove him right. I, however, believe that the fundamentals are stacked even higher against Second Life than your articles already show.

There are certain rules that govern why a new technology becomes popular or not. These rules are deeply rooted in economics. The main rule is:

- It makes some part of your life easier/better (Optimizing of utility functions).

It does this by:
  • Lowering transaction costs for performing a certain function (looking up train timetables online has a much lower transaction cost than a timetable book or calling a number)
  • It grants you more control over your choices. Though you might spend an equal or higher amount of time and money doing what you were doing before, it allows you to reach a more optimal solution than you could before the introduction of the technology. (Travel sites, housing sites, etc. all tend to swamp you with options, but many people like that compared to the old situation.)
  • You receive a higher level of service compared to the old situation.
  • It achieves some kind of network effect. Adding more users to the network increases the individual and total utility function of the nodes in the network at a higher than linear rate.
  • A new niche emerges big enough to cater to your needs (look at the long tail).
All in all it's about lowering costs and changing utility functions.
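The "higher than linear" network effect in the list above is commonly sketched with a Metcalfe-style model — my gloss, not part of the original argument: each user can reach all the others, so total utility grows with the number of possible connections rather than with the number of users.

```python
def total_utility(n, k=1.0):
    """Metcalfe-style sketch: each of n users can reach n-1 others,
    so total utility grows ~ k * n * (n - 1) -- superlinear in users."""
    return k * n * (n - 1)

for n in (10, 20, 40):
    # doubling the user base roughly quadruples total utility
    print(n, total_utility(n))
```

Whether the exponent is really 2 is debated; the point for the argument here is only that growth is faster than linear, which any such convex utility function captures.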

Now what happens if we look at Second Life and similar online worlds and measure them along this yardstick of making your life better, lowering costs and changing utility functions.
  • The main thing Second Life is good at is bringing people from different parts of the world together and letting them interact in a relatively natural way, like in First Life.
  • A second thing it is good at is quickly building three-dimensional representations of objects and allowing people to see and interact with them.

Now compare SL to some of the other applications we use the net for, and we see why it is not as big a hit as some people hoped it would be for advertising and other applications.
  • For finding information and weighing different options, the plain old Web 2.0 is better and quicker. You go to Google and from there it's two clicks to the right destination. As various studies have shown, people are not willing to wait more than a couple of seconds to find the information they are looking for. In Second Life, just getting somewhere, orienting yourself, interacting, etc. takes minutes. So if you want to find or disseminate information, the Web wins. The same goes for buying real stuff.
  • For communicating, Second Life is bound by proximity in the Second Life world. It's almost like the real world in that regard. So in many situations it gets beaten by instant messaging, email, telephone, etc. as a way of communicating. It does have a bonus when it comes to chatting up complete strangers, by allowing only a limited number of people to step up to you (whereas on a chat system women often get swamped). So as a virtual bar it has some positive sides.
  • If one wants to interact with customers in Second Life, it is not unlike opening up a store on the main street, and it should be run as such: 24 hours a day people should be attending to the shop. You need the right people there, they should be knowledgeable, etc. There are no Coca-Cola information stores on the main street, because it would be too costly for Coca-Cola and there doesn't seem to be a benefit compared to its current way of doing business. Second Life might lower these costs by letting you open one store and reach the world, but the question is what exactly the added bonus would be for Coca-Cola, Fannie Mae, Home Depot, etc. of having a virtual store/information boutique that needs to be manned 24x7, compared to a combination of a website, an information phone line (or IM on a site) and, if they have such a thing, a physical store. That question is very hard to answer.
  • As an advertising medium Second Life can house billboards. The advertiser hopes it's as busy as Times Square or similarly well visited. The numbers are not such that this seems a very attractive proposition: not too many eyeballs and not always the right demographic, plus, like in Real Life, people need to bump into it. The more Second Life grows, the smaller the chance of bumping into the advertising (the reverse of network effects). So Real Life and Google Ads are probably a better way of spending the advertising budget for many.
  • Second Life can also be an advertising medium by realising a 3D representation of your products. However, a website can often offer the exact same possibilities, plus the added bonus of being able to control the look and feel of the experience.
  • Advertising through immersive media in Second Life (scavenger hunt, adventure type) is limited by the environment of Second Life itself and the number of users it has. It's probably more effective and efficient to build that world in Flash, without having to deal with all the side effects Second Life might have.

So where does this rant bring us? Second Life's usability for making things better is mainly limited to those situations where we want people to interact in a bar-like fashion without being in the same physical location. A virtual book signing might work, but a well-moderated chat session on Amazon might be a lot more effective, as it could let people join more easily, without making avatars etc., and still allow the session to be streamed and stored on YouTube, and people could actually receive a signed copy of the book they bought. (BTW, why doesn't Amazon have interactive sessions with writers? Or did I miss something?)

A big advertising campaign however is probably much more effective when using flash and other such technologies on your own site combined with a proper on and offline campaign.
It's hard to see what kind of bonus SL has when it comes to working in project groups, compared to an adequate setup of videoconferencing, group wikis, IM, etc.

All in all, the conclusion of my rant is: Economics doesn't support virtual worlds as a replacement for the web and the real world.

Wednesday, July 18, 2007

Yahoo's Earnings seen from Jim Collins' Good to Great

I reacted over at Giga-Om to this story:

Interesting numbers from Yahoo! I've been reading Good to Great by Jim Collins at the moment, and it seems that Yahoo is showing all the signs of an also-ran company.

First, it didn’t have a level 5 leader, who selfishly looked at the best interest of the company. They may have ousted him, but one hopes the current management team is up to snuff.

Second, the question is whether they have the right people on board and in the right places. Their hiring seems erratic and not really best of breed in the business.

Third, and really bad from my point of view: they don’t confront the facts. Fact is, Google is a better ad company than Yahoo will ever be. Google places its ads better, through better technology, and because of that commands the biggest share of eyeballs. Yahoo has nothing to offer potential advertisers on the technology front.

Fourth, Yahoo doesn’t know what it wants to be: is it the best content company in the business, or the best search company in the business? Well, it cannot be the latter. Google has taken that and outspends Yahoo in keeping it. Google has a clear focus, though it should dabble less and spend more time coherently achieving that goal. Yahoo should focus on its content business, which in the US is the best it can ever be. Yahoo should make it its business to ensure that whenever somebody looks for something Yahoo does, Google points to Yahoo for the content first, to such an extent that the user will go to Yahoo without even looking at Google (Yahoo Finance is a great example).

Fifth, Yahoo should cut all the crap it isn’t willing to deal with. Sell the advertising business to Microsoft, they’re dumb enough to pay top dollar for it. Then move to Google’s Adsense and squeeze every ad dollar out of it. Advertisers would love it.

Sixth, keep focus, keep discipline, keep adding great content, keep focus, discipline, add more great content.

Monday, April 16, 2007

MS and at&t urge anti-trust measures against Google-Doubleclick

Slashdot reports that Microsoft and at&t are afraid that the Google-Doubleclick deal will hurt the competitiveness of the marketplace. I really wonder about that; I think it's mostly sour grapes and net neutrality that are at play here. A short rant/analysis from me was the result, which I'll post here as well. There is more to it, but I don't have the time for a full analysis.

Interesting that AT&T joined in. They are moving against Google to support their net neutrality position. But let us look at how much money there really is in this market and then see whether an almighty Google might actually be able to hurt AT&T. Google currently makes $10 billion a year from 281 million broadband users worldwide. That is $35 per broadband user per year, or about $2.90 a month. Just look at the price of AT&T's offering and you can see that Google's ARPU (Average Revenue Per User) is no more than a few percent of AT&T's ARPU. Google's ARPU supports various content offerings through its business model: more than 40% of the ARPU flows to the content owner. So at the moment AT&T can beat up Google for a maximum of $2 per month per customer.

So how big could Google's ARPU grow? In a country like the Netherlands, 5.7 billion a year is spent on advertising to about 7 million households. That makes 67 per household per month (and this number isn't growing much). This is the total advertising expenditure on the national market and includes all major media: newspapers, television, direct mail, cinema, magazines, billboards, internet, etc. If Google can capture part of that on a global scale, it amounts to a major amount of money. But now look at it from an ARPU point of view. It would be hard for Google to get more than 10-15% of this market ($6-$10 per household per month), because they would have to replace all the existing ways of doing advertising, which are still powerful and sustain many content business models.
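The back-of-the-envelope arithmetic above can be checked in a few lines (all figures as given in the post, rounded; nothing here is from Google's or AT&T's own reporting):

```python
# ARPU sanity check using the figures quoted in the post.
google_revenue = 10e9        # Google's yearly revenue, USD
broadband_users = 281e6      # broadband users worldwide

arpu_year = google_revenue / broadband_users   # ~35.6 USD/user/year
arpu_month = arpu_year / 12                    # ~2.97 USD/user/month

# Dutch advertising market as an upper bound for ad-funded ARPU.
nl_ad_spend = 5.7e9          # yearly ad spend in the Netherlands
nl_households = 7e6
nl_per_household_month = nl_ad_spend / nl_households / 12   # ~67.9/month

# If Google captured 10-15% of total ad spend per household:
ceiling_low = 0.10 * nl_per_household_month    # ~6.8 USD/month
ceiling_high = 0.15 * nl_per_household_month   # ~10.2 USD/month
```

The monthly figure comes out at roughly $2.97, consistent with the "$2.90 a month" in the text, and the 10-15% ceiling lands in the $6-$10 range quoted above.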

If a telco could get its hands on Google's revenues, it might be able to knock a few dollars off the price of a broadband connection. But $6-$10 isn't going to pay for the line and the costly upgrades. Just go and look up the financial information of telcos to see how big they are and how much money they spend on a yearly basis. Google is dwarfed by that. (Broadband Reports said that telcos would spend $41 billion on network upgrades just this year; Google made only $10 billion last year.) Odlyzko was right when he said "Content isn't King", and we can add to that: "Advertising will never be king".

So when AT&T says that Google is making money over their networks, we are talking about pocket change compared to what AT&T is charging its customers.

Will Google get a dominant position? Only if they offer content providers the most money for showing a banner and offer advertisers the greatest number of clickthroughs. That is why Microsoft and Yahoo are losing out. They offer fewer ad views per day, which generate fewer clickthroughs per thousand ad views, pay less per click and deliver advertisers fewer conversions. Why would you use them? Nobody in the equation is better off using Microsoft or Yahoo: not the content provider and not the advertiser.

Now let's hope Google pays some attention to my pitch for Adsense for Charity. The idea is that anyone using Adsense can designate a percentage of their Adsense revenues for good causes or open source projects. Even if only a very small percentage of Adsense users did this, we would still be talking about millions of dollars per year. So please help spread this idea by linking to it or passing it onwards.

Monday, March 05, 2007

Adsense for Charity (English version of Frankwatching article)

Frank Janssen of Frankwatching gave me the opportunity to pitch my idea for Google Adsense for Charity at his site. I hope his readers will help me generate more attention for this idea and come up with ways to get it higher up Google’s to-do list. As said in previous posts, the origins of this idea lie in me looking at the enormous amount of $8 on my Adsense account and wondering if there was something better to do with it, instead of waiting 8 years for the first check. I have also found out that the idea is not unique: two weeks before I blogged about it, Michael Yarmolinsky also asked Google for this possibility. Google’s first reaction to me has been that they will have a look at it.

The idea
It would be great if it were possible to specify in Google Adsense that (part of) the revenue will be sent to charity. This way it becomes easy to contribute to open source projects or other good causes, which will increase the income of those charities. It will also become possible for accounts that generate little revenue to send the money that is there to a charity. (And yes, Microsoft and Yahoo could also implement this idea, but unfortunately for them most of the money is at Google at the moment.)

Possible ways to implement the idea:
-Account-owners can specify that all the revenues of their Adsense account will be sent to one (or more) charities. At the end of each month the revenues of the account will be transferred to the charity, regardless of whether they have reached the limit of $100.
-Account-owners can specify that x% of their revenue will be sent to charity. At the end of each month this percentage will be sent to charity.
-Account-owners can send a fixed amount per month to a charity, if this amount is generated by the account. The remainder is sent to the account-owner (if it’s over $100)
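To make the three options concrete, here is a minimal sketch of how the month-end split could work. The function name, the mode strings and the behaviour below the threshold are all my own invention, not anything in the real Adsense product:

```python
# Hypothetical month-end split between charity and account-owner
# for the three payout models described above.

PAYOUT_THRESHOLD = 100.0  # Adsense's minimum payout amount, USD


def split_revenue(revenue, mode, value=0.0):
    """Return (to_charity, to_owner) for one month's revenue.

    mode "all"     : everything goes to charity, threshold ignored
    mode "percent" : value% goes to charity every month
    mode "fixed"   : up to `value` dollars goes to charity
    """
    if mode == "all":
        return revenue, 0.0
    if mode == "percent":
        to_charity = revenue * value / 100.0
    elif mode == "fixed":
        to_charity = min(revenue, value)
    else:
        raise ValueError(mode)
    remainder = revenue - to_charity
    # The owner's share is only paid out once it clears the usual
    # $100 threshold; below that it would stay in the account.
    to_owner = remainder if remainder >= PAYOUT_THRESHOLD else 0.0
    return to_charity, to_owner
```

For example, an account earning $200 in a month with a 25% charity setting would send $50 to the charity and pay the remaining $150 to the owner, while a small account on the "all" setting sends every dollar to charity regardless of the threshold.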

What charities?
I personally don’t care. They may be American, medical, Unicef, Open Source, the Bill and Melinda Gates Foundation, just as long as they do good. But Google probably will opt for a practical solution.

Advantages for the account-owner
The advantages for account-owners lie in ease and simplicity. If account-owners had to do everything themselves, they would first have to cash the check and then transfer the money (internationally). That’s a lot of work for small amounts of money. It also gives the account-owner a good feeling that the money that used to be locked into a small revenue-generating account is put to good use.

Advantages for Google
Google will be able to improve on its “don’t be evil” image. It will help charities (and maybe open source projects); Google will profit from this in good PR and maybe better open source software. It will also keep away discussions about small amounts of Adsense income that are locked into an account. Another advantage is that it will make Adsense more attractive to a larger group of websites, which in turn will improve the reach of Adsense and its attractiveness to advertisers. Keeping a public score of payments to charities will only help here. There might be a small issue with Google losing some interest on the money, but this is probably small compared to the goodwill. There are hardly any costs in executing this idea; Google will only have to screen charities.

How much money are we talking about?
Google had $10 billion in revenue this year, almost all of it generated by advertisements. Google pays out about 40%, which is $4 billion. I assume this is a long tail idea aimed at the end of the tail. The end of the tail is the last 0.5% of that $4 billion payout, or $20 million a year; the other 99.5% gets paid to people with an Adsense account. But even if it were only 0.1% or less, it’s still an interesting amount of money.
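The estimate above is just three multiplications, with the figures as stated in this post:

```python
# Long-tail estimate of the potential charity pool.
google_revenue = 10e9    # yearly revenue, USD
payout_share = 0.40      # share paid out to Adsense publishers
tail_share = 0.005       # last 0.5% of the payout tail

total_payout = google_revenue * payout_share   # $4 billion to publishers
charity_pool = total_payout * tail_share       # $20 million/year for charity
```

Even at a tenth of that tail share (0.05%), the pool would still be around $2 million a year.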

Google has replied to my suggestion that they will look into it. This is great, but I would prefer hearing that they will implement it. The sooner, the more money there is for charity.

What can readers do?
Spread the word! Blog about it! Send it on to a Google manager you know! And help me find better ways to get attention to this idea. All your comments and ideas are welcome.

Friday, March 02, 2007

Reaction from Google to the charity suggestion

Well, Stephanie from the Adsense team replied to my idea and forwarded it on. Let's hope we'll get a feature allowing us to donate (part of) the earnings of a page to charity (or open source). I'm still thinking about new ways this could work, like being able to donate part of the money to open source projects as a thank-you for the tools. By enabling people to donate directly, without them having to do anything for it, it is more likely that more people will donate (part of) their Adsense earnings to some worthy cause.

Google's e-mail:

Thanks for your thoughts on enabling publishers to donate AdSense earnings
to charity. I'm happy to pass along your comments to our engineering and
product teams.

Suggestions and ideas like yours directly contribute to making AdSense
better, and we appreciate your perspective. Please also feel free to
submit any future suggestions through our online form:


The Google AdSense Team

World Health Statistics, but cool!

Frankwatching pointed me to this great presentation at this year's TED conference by professor Hans Rosling of the Karolinska Institute in Sweden. He gives a great talk on the misinterpretation of the third world, but the best thing is: he has cool graphics. He for instance shows how the internet spread across the world by letting balloons rise on a scale, and you can see the great differences. Have a look at his presentation here, or play with the data online at Google.
He also has an organisation around it.
This is also very much an argument for why governments should open up statistical data.


Wednesday, February 28, 2007

Appeal to Google! Use Google Adsense for Good!

Please join me in this idea! I sent the Google Adsense people the following suggestion/feature request. The Google ads that you see around the page don't generate much income and, well, I don't really care about that; they're partially a service to let you find interesting companies and partially a way for me to keep track of statistics (before Google Analytics came around). It's a bit of a long tail idea, where many small sites together generate a big amount of money for charity. I hope some people in the blogosphere will help give this idea some momentum and also encourage Google to let people easily make a charity the beneficiary of the revenue the Google ads generate. Below you find my mail to the Adsense people.

"Hi, I would like to suggest that you add the option to allow people to donate the money they generate with Adsense directly to a charity of their choice. I have currently made the whopping amount of 8 dollars with my blog; at the current speed I'll get my first check in 12.5 years. And really, I don't care that much about that money, but if I could opt to send it to a cancer or disability charity, I would love to do that. The little bits of many little blogs will probably add up to a nice sum for the charities involved.

I understand that it might be difficult to offer a full range of charities, but even if you only used American ones, I would still send that little 8 dollars a year there. Your financial department might not like losing the interest on all that unclaimed money, but that's small fries compared to the good it might do.

I'll also post this on my blog and hope other people join in."