Friday

FON versus Joost versus Meneame

Which company is doing better on Google Trends in Spain?

[Google Trends graph: fon vs. joost vs. meneame, Spain, last 12 months]


Wednesday

Global identifier of Internet adult content

[Story]
Back in May '05, before starting my Network on Chip Master Thesis, I proposed another type of project as my master thesis. The topic had nothing in common with my actual field, but I thought I could come up with a better solution to identify Internet adult content, not to censor it. The project was rejected because telematics is not a strong research field at the ULPGC (its strong fields are microelectronics and signal processing), and the tutors I tried to convince were more focused on low-quality, short-term profitable projects. While I can understand that short-term profits are really important for a company, I think university research should be long-term, high-quality research.
Since at that time I did not have a blog, this idea sat in a drawer until today ;-). Be patient and don't expect a high-quality analysis, because my main notes are back in the Canary Islands and I am writing from memory here in the Netherlands.
The idea was simple. A lot of entities spend money on banning adult-content webpages, from search engine filters such as Google's to parental-control filters for browsers. To be honest, I understand this situation, since I have a little sister (6 or 7 years old at the time I studied the project) who uses the Internet. Thus, I was afraid of how adult content could shock a child at an age when they are not able to process this kind of material, no matter what you teach them.
[Main idea]
The idea of the project is to change the approach from extremely costly individual banning to cheap global tagging. To do so, the proposal follows a train-ticket model in which the webmaster tells the tagging entity whether or not the webpage/domain has explicit adult content. The tagging entity trusts the webmaster's tagging, but there is a revision process focused on alerted webpages, and if an adult-content page was tagged as a non-adult-content page, the webmaster will be required to pay a dissuasive price in order to retain the domain name.

What's the mystery? The global idea is simple, but the implementation is not, since right now there is a lot of complexity in the management of domain names, so let's take a look at this complexity.

The Historical Context
-1993, Network Solutions, Inc. (NSI) was granted an exclusive contract by the National Science Foundation (NSF) to be the sole Domain name registrar for .com, .net and .org Top Level Domain (TLD) names. NSI also maintained the central database of assigned names called WHOIS. Network Solutions acted as a de facto registrar, selling names directly to end users.
-1998, on January 28, Postel, on his own authority, emailed eight of the twelve operators of the Internet's regional root servers and instructed them to change the root zone server from Network Solutions (NSI)'s A.ROOT-SERVERS.NET. (198.41.0.4) to DNSROOT.IANA.ORG (198.32.1.98). The operators complied with Postel's instructions, thus splitting control of Internet naming between IANA and the four remaining U.S. Government roots at NASA, the .mil server, BRL and NSI. He soon received a telephone call from a furious Ira Magaziner, President Clinton's senior science advisor, who instructed him to undo the change, which he did. Within a week, the US NTIA issued its "Green Paper" asserting the US government's definitive authority over the Internet DNS root zone.
-In 2000, Network Solutions (NSI) was acquired by VeriSign for about $21 billion.
-2003, In negotiations with ICANN, VeriSign gave up operation of the .org top-level domain in return for continued rights over .com, the largest domain with more than 34 million registered domain names.
-In mid-2005, the existing contract for the operation of .net expired and five companies, including VeriSign, bid for management of it. On the 8th of June 2005 ICANN announced that VeriSign had been approved to operate .net until 2011.

Light in the craziness of domain names.
The key element is to work with the WHOIS database. The WHOIS database manager, as the tagging entity, would take on a new task: to examine, through a computerized process with a final human report, whether or not a webmaster follows its web tag, and if the webmaster does not, to increase the $6/year fee for keeping that domain name. Thus, it does not matter which company is the registrar, and with the extra money the WHOIS database manager could afford a detailed human check of each web page that has an alert. In this system it is key that search engines or other companies that crawl/index the web send alerts about suspected pages to the tagging entity. Otherwise, the tagging entity has to build a crawler by itself, but it is easier and cheaper to reward each successful alert made by the search companies. Therefore the tagging entity would evaluate the quality of the alerts from different suppliers (I am thinking of Google as the most capable) and would maintain a weighted queue of pending alerts, taking into account each supplier's history of success.
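
To make this concrete, here is a minimal sketch (mine, not part of the original draft) of how the tagging entity could order pending alerts by each supplier's track record. The class, the names and the optimistic prior for new suppliers are all my assumptions:

import heapq

class AlertQueue:
    """Pending mislabeling alerts, weighted by the historical success
    rate of the supplier (search engine) that reported them."""

    def __init__(self):
        self._heap = []
        self._counter = 0            # tie-breaker for equal priorities
        self.supplier_stats = {}     # supplier -> (confirmed, total)

    def success_rate(self, supplier):
        confirmed, total = self.supplier_stats.get(supplier, (1, 2))
        return confirmed / total     # optimistic prior for new suppliers

    def push(self, supplier, domain):
        # heapq is a min-heap, so negate the rate to pop the best suppliers first
        priority = -self.success_rate(supplier)
        heapq.heappush(self._heap, (priority, self._counter, supplier, domain))
        self._counter += 1

    def pop(self):
        _, _, supplier, domain = heapq.heappop(self._heap)
        return supplier, domain

    def record_review(self, supplier, confirmed):
        # After the human review, update the supplier's track record
        c, t = self.supplier_stats.get(supplier, (1, 2))
        self.supplier_stats[supplier] = (c + (1 if confirmed else 0), t + 1)

queue = AlertQueue()
queue.push("google", "badly-tagged-site.example")
queue.push("yahoo", "another-site.example")
print(queue.pop())   # the alert from the supplier with the best record so far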
This is the main idea, but there are a lot of elements to research. For example, I have thought of 6 tags (a small data-model sketch follows the list):
1) Explicit adult-content.
2) Unclassified (reasonable alert system and no extra payment). Change the status to explicit adult-content if that is the situation. It will be the default status at the beginning.
3) Non-adult explicit (soft alert system, extra payment and change of tag if needed). Adult-content sites are 80-100% explicit adult images, so with this intermediate tag it should be extremely difficult to raise a false alert against general-content pages such as http://www.nytimes.com/, art pages, blogs or any other "normal" web page with less than 10-30% explicit adult pictures.
4) Explicit child-content (high-priority alert system, extra payment and change of tag if needed).
5) Multidomain. Here is the biggest problem of the project. Example: www.ulpgc.com/USERS/John. It is also the part where there is room to design and implement a solution.
6) Untrusted multidomain.
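
As a small data-model sketch of those six tags and the dissuasive pricing (the enum names and the 10x fee multiplier are my own illustration, not part of the original draft):

from enum import Enum

class ContentTag(Enum):
    EXPLICIT_ADULT = 1          # explicit adult content
    UNCLASSIFIED = 2            # default status, no extra payment
    NON_ADULT = 3               # "normal" pages, soft alert system
    EXPLICIT_CHILD = 4          # high-priority alert system
    MULTIDOMAIN = 5             # tagging delegated to a local server
    UNTRUSTED_MULTIDOMAIN = 6   # the local server broke the protocol

BASE_FEE = 6.0  # the usual $6/year domain fee

def renewal_fee(tag, confirmed_mislabels):
    # Dissuasive pricing: every confirmed false tag raises the renewal
    # fee; the 10x multiplier is an illustrative guess.
    if tag is ContentTag.UNTRUSTED_MULTIDOMAIN or confirmed_mislabels > 0:
        return BASE_FEE * (10 ** max(1, confirmed_mislabels))
    return BASE_FEE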

Therefore the idea of the project was to clarify this whole scheme and implement a small prototype, so the project tasks are as follows:
[Main Tasks]
Main task 1:
-Global solution for simple domains. Study the tags. Add an element to the Resource Record. Create a secure policy to access the tag, or use WHOIS protocols (a sketch of a raw WHOIS query follows).
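
For the WHOIS route, the query side is already trivial with the plain WHOIS protocol (TCP port 43, RFC 3912). The Content-Tag field below is hypothetical; it is exactly the kind of field the tagging entity would have to add:

import socket

def whois_content_tag(domain, server="whois.verisign-grs.com"):
    # Plain WHOIS query (TCP port 43, RFC 3912); we then look for a
    # hypothetical 'Content-Tag:' field that the tagging entity would add.
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(domain.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    response = b"".join(chunks).decode("utf-8", errors="replace")
    for line in response.splitlines():
        if line.strip().lower().startswith("content-tag:"):
            return line.split(":", 1)[1].strip()
    return "unclassified"   # the proposed default tag

print(whois_content_tag("example.com"))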

Main task 2:
-Multi-domain. Example: iuma.ulpgc.es/USERS/alumnos/JohnTravolta

-Propose a special tag (Multidomain) to delegate responsibility to a local server.
-Create a secure protocol that connects a tag query to the WHOIS database with an answer from the local server that manages the subdomain.
-Define a protocol for communication between the WHOIS database manager and the local server, whose main goal is to let the WHOIS database manager change the local server's tag status. If the local server does not follow this protocol, the WHOIS database manager will set the status of that multidomain to untrusted multidomain and there will be an economic sanction when the multidomain renews its domain name. A sketch of the delegation logic follows.
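
A sketch of that delegation logic, where every name is a placeholder of mine (the real protocol design is precisely what the thesis would have had to work out):

def resolve_tag(domain, path, whois_lookup, local_server_lookup):
    # whois_lookup(domain) -> tag string from the WHOIS database
    # local_server_lookup(domain, path) -> tag from the local server, or None
    tag = whois_lookup(domain)
    if tag != "multidomain":
        return tag                        # simple domain: the WHOIS answer is final
    local_tag = local_server_lookup(domain, path)
    if local_tag is None:
        return "untrusted-multidomain"    # sanction path in the proposal
    return local_tag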
Main task 3:
Create prototypes of all these elements and study the efficiency and security of the proposed protocols. Study how the tags propagate along the DNS servers (times, efficiency, etc.).
[FAQ]
So this is, as far as I remember it, the draft of the rejected project. I understand you might have some questions, so here are my answers to the typical ones:
Q1: Multidomain is key. A1: Yes, it is key, and that's why this project is interesting.
Q2: DNS is not only about the web. A2: Other services can be tagged as unclassified.
Q3: It will be like censorship. A3: The idea is to be flexible and just to improve things, not to be a policeman. The idea is that you put the filter in your browser; it is never put there by an ISP or a government. NEVER. If it is not possible to guarantee that China won't use these tags to censor, it is better to close this project forever.
Q4: People can access adult content directly through the IP protocol, bypassing DNS. A4: AGAIN, the idea is not to censor; the idea is to improve search engines, child filters for browsers and the typical click-through access from one page to another. The tags will be used only if the final user wants to filter (you can always access whatever you want or, on the other hand, filter adult-tagged pages from your browser; you make the decision). For example, if you are a porn addict you can make a search engine look only for pages with the adult tag. This project is not about censorship; it is about efficiency.
Q5: Once this project prototype is made, it is more about politics than about engineering. A5: Absolutely true.
Q6: I don't see the money to support the final human reports. A6: The domain cost will increase if webmasters lie about their tags. That money should cover the human report cost, but it is important to have an automated engine that produces trusted alerts (as I said, I think Google and other search engines might have a good one).
Q7: It is dangerous to have an entity controlling web content. A7: It is not web content control; it is tag identification made available to the final user. In addition, it is only adult/non-adult tagging, and there is soft detection (sites with less than 30% adult content) to produce zero false positives. At the same time, the point is to support net neutrality. I can see some extremist "family" groups clamoring for ISPs to filter tags with adult content, and big Washington lobbies (don't forget that ICANN is controlled by the USA) pressing the US Congress to add more tags and to replace the soft detection (over 30% adult content) with a hard and stupid detection (over 0.1% adult content). To answer this, I can say that we need to be strong supporters of net neutrality, and we will need that support with this project and without it, because in the coming years we (as the Internet community) will be facing a lot of stupidity/insanity coming from lobby groups.

The Italian Man Who went to Malta



Besides Italians, Spanish guys also fit the video ;-).

Monday

Nautilus script line count in Ubuntu Gibbon 7.10

[Story]
I was in an interview two weeks ago and the interviewer popped an unexpected question.
Interviewer- How many lines of code does your Master Thesis project have?
Ray- "To be honest, I have NO IDEA." (My NoC simulator is a modular system with around 125 source files.)
Interviewer- But can you give me an approximation?
Ray- "I haven't a clue." [You should have seen his face. I tried to save the situation by giving him the CD with the project, but he didn't take it, so I guess I lost one job opportunity. (The interesting thing is that I believe this department needs me more than I need them. There are not many engineers with the background, passion and enthusiasm this research topic needs, and I have all three ;-). Life is just like that; my experience tells me that sometimes, when you follow a different path than the one you were planning, the result is much better than the initial plan.)]

[Problem]
What do we do to count the number of code lines across many files and folders?
[Solution]
A Nautilus script.
I searched the Internet and found a script package from "Nicolas Cuntz (ni_ka_ro), 16.4.2005" with a line-count script included, but since it is an old implementation it does not work with the latest versions of Nautilus and bash (or at least it does not work for me).

I have updated some lines to solve the main problems, and right now I have a working solution. It is important to point out that there is still an error during execution, but it does not affect the counting. In the future I will fix it and make a clean script, but in the meantime you can download this functional version here.
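
If you only need the counting itself, the core logic is tiny. Here is a minimal sketch in Python (Nautilus scripts can be any executable; the extension list is just my choice):

#!/usr/bin/env python3
# Minimal recursive line counter -- the core of a Nautilus line-count script.
import os, sys

SOURCE_EXTS = {".c", ".h", ".cpp", ".hpp", ".py", ".sh", ".java"}

def count_lines(root):
    total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1] in SOURCE_EXTS:
                with open(os.path.join(dirpath, name), "rb") as f:
                    total += sum(1 for _ in f)
    return total

if __name__ == "__main__":
    for target in sys.argv[1:] or ["."]:
        print(target, count_lines(target))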

-How to install & use it
Extract it and move the files to the folder /home/user/.gnome2/nautilus-scripts/
Don't forget to move the hidden folder '.scripts' to the same destination.
Give execution permission to all the files inside .scripts and also to the 'line_count' script in /home/user/.gnome2/nautilus-scripts/
That's all. Select the folder with the project, then Right click->Scripts->line_count.

Yes I know, the recursion along folders is just great.
I hope you find it useful, and don't forget to reply if you have a better solution.

Income tax in the Netherlands

Talking with my friends here in Eindhoven, I was surprised by their knowledge of the amount of taxes they pay to the government.
First, one friend told me that the tax was about 30%, but one day later she called me back to tell me that she had been wrong, because the real tax was 42%. I was surprised, but I thought: she is the one paying taxes, so she should know really well how much she is paying.

I assumed she was right until yesterday, when I realized she might be wrong: I had the opportunity to look at the gross and net amounts of a salary, and it was quite clear that the taxes weren't 42%.

Thanks to Wikipedia, I now know how income taxes work in the Netherlands: it is a bracket-based income taxation.
Update: It is not that easy. I made a mistake, because the bracket rates are only one aspect of the final income tax; they are known as Box 1 taxes. There are two other taxes, but a new worker might not be subject to them, because they involve savings and investments (Box 3) and substantial business interest (Box 2). It is important to point out that it is incorrect to use the Box 1 rates directly as the final taxes (that was my error), because you also need to subtract an amount of €2,043. Yeeeeees! I am happy I was wrong.
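
The Box 1 computation itself is easy to sketch. A minimal Python version with the 2007 figures as I understand them (verify against the official tables before trusting any number):

# Box 1 brackets for 2007 (approximate; check the official tables)
BRACKETS = [
    (17319, 0.3365),         # up to 17,319: 33.65%
    (31122, 0.4140),         # 17,319 - 31,122: 41.40%
    (52228, 0.4200),         # 31,122 - 52,228: 42%
    (float("inf"), 0.5200),  # above 52,228: 52%
]
GENERAL_TAX_CREDIT = 2043    # the amount to subtract, as noted above

def box1_tax(gross_year):
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if gross_year > lower:
            tax += (min(gross_year, upper) - lower) * rate
        lower = upper
    return max(tax - GENERAL_TAX_CREDIT, 0.0)

gross_year = 3000 * 12   # e.g. 3,000 euro gross per month
tax = box1_tax(gross_year)
print(f"effective rate: {tax / gross_year:.1%}")   # well below 42%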

I made this simple sheet in order to have a good answer if a friend asks me about the taxes here in the Netherlands.
You can play a little bit with the sheet by entering a new value for the gross salary per month.

Sunday

Power consumption along Ubuntu versions

Phoronix has published an article about power consumption across the latest versions of Ubuntu.
[Main graphs: power consumption across Ubuntu releases, from the Phoronix article]


It's interesting because with the latest kernel the power consumption seems to go down a little, which is extremely valuable because, as Phoronix states, there are more processes running in Ubuntu 7.10 than in 5.04.
Thanks to the Phoronix benchmarks we can conclude that the latest Linux kernel brings no revolution in terms of power consumption, but on the other hand we know that the Linux kernel people are working on this front. It's also interesting how Intel has started to work on the problem. They have built a tool, PowerTop, that identifies which applications are consuming the most power and thus draining the battery. Eventually, this tool will allow developers to optimize their applications for maximum power savings.

The full power-consumption article is available at Phoronix.

Monday

5 Myths About Sick Old Europe

In the global economy, today's winners can become tomorrow's losers in a twinkling, and vice versa. Not so long ago, American pundits and economic analysts were snidely touting U.S. economic superiority to the "sick old man" of Europe. What a difference a few months can make. Today, with the stock market jittery over Iraq, the mortgage crisis, huge budget and trade deficits, and declining growth in productivity, investors are wringing their hands about the U.S. economy. Meanwhile, analysts point to the roaring economies of China and India as the only bright spots on the global horizon.

But what about Europe? You may be surprised to learn how our estranged transatlantic partner has been faring during these roller-coaster times -- and how successfully it has been knocking down the Europessimist myths about it.....

....Continue reading it at Washington Post

Wednesday


Writer’s Tools

Writer's Tools is an all-around tool designed to help OpenOffice.org users perform a wide range of tasks. It makes it easier to back up documents, look up and translate words and phrases, manage text snippets, and keep tabs on document statistics.


It is available here, and if you want more information before downloading it there is a good manual here.

Sunday

Documentation Generator

Just in case you haven't used a documentation generator before, I strongly recommend one for some situations. For example, it is very helpful when you study a very big project or a hell of a codebase. In my field, Electronic Engineering, there are tons of engineers coding C, C+Classes, or C+Classes+SystemC with no love for the source code.
In this situation I suggest Doxygen as the documentation generator. You can use it for C++ (SystemC), C, Java, Objective-C, Python, IDL (Corba and Microsoft flavors) and, to some extent, PHP, C#, D and ActionScript.
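
Doxygen even understands Python, so here is a minimal taste of its comment style (the function itself is just a toy of mine): a "##" block right before a definition becomes its documentation.

## @brief Compute the Hamming distance between two equal-length bit strings.
#  @param a  First bit string, e.g. "1011".
#  @param b  Second bit string of the same length.
#  @return   Number of positions at which the bits differ.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

Run doxygen -g once to generate a template Doxyfile, point its INPUT option at your sources, and run doxygen to get the HTML documentation.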

Friday

Speed up Open Office

I couldn't go to bed without sharing this tip with every Open Office user.
The original (and better) info is here:

To sum up:
Start Open Office and click on "Tools -> Options". This should open the configuration. Click on "Memory" in the left menu and change the following settings:

- Number of Steps: 30
- Use for Open Office: 128 MB
- Memory per Object: 20 MB
- Number of Objects: 20

Click on "Java" in the left menu afterwards and uncheck "Use a Java Runtime Environment". Click "OK" and restart Open Office to see how fast it is now.

=-o
F%&k1ng Amazing.

Wednesday

Session startup problem in Ubuntu

I just want to write a tip about a problem with session startup programs. You may not believe this, but it took me some time to solve, so I want to point out the solution:

Problem: You go to System->Preferences->Sessions to add a new program to the startup. The program seems to be added, but when you re-open System->Preferences->Sessions the change has not been saved, so the program is not in the session list.

Solution: There is a permission error on the .config directory. Execute the following:
sudo chown -R username:usergroup /home/username/.config/

Replace username and usergroup with your username and group.

That's it.

Thursday

Is Joost going to crash?

There are many people in the blogosphere analyzing how Joost could crash. They mainly focus on their view of Joost as TV on a PC, with the classic TV channel structure, which at this point of the game (Youtube has given freedom to video, the freedom of the media!!!!) feels like woooowwwwww, Joost has an old-fashioned view of video...
I won't say that is not true, but instead of continuing with that view I will approach a Joost analysis with some engineering flavour.
So how could Joost crash?
Let's focus for a moment on how Joost is building its TV distribution empire.
Joost is basically a peer-to-peer (p2p) platform with super-node support. Simplifying, it is a torrent-like protocol where the seeder has tons of bandwidth available, with one huge difference in how Joost sends data to the client: Joost tries to send the client the initial packets of the video rather than the packets that are best for the health of the p2p network, which is great for real-time TV but may hurt the efficiency of the p2p network.

Now let's take a look at Joost's principles:
Its ground rules are:
-No firewalls.
-No hardware load-balancers.
-High availability (this is TV).
-Lots of bandwidth (this is TV).
-Rapidly provisionable.
-Business requirements.
-Cost-effective.
While the basics of the protocol are:
-Joost servers are the original seeders of content.
-Joost servers also handle the “long-tail” (which is still pretty long)
-Joost server "tops-up" the DSL "bandwidth" gap.
-The client first contacts the super-node, which handles control traffic only and directs clients to peers. Peers are re-negotiated frequently.
-Each video stream comes from multiple peers.
-Joost does not buffer, and they support this decision just by saying "people change channels a lot, so with buffering we lose tons of bandwidth".

Knowing the basics of how the Joost network works, taking into account Joost's open positions, and what the company has been doing in the last couple of months, I see no sign of a thread focused on long-term p2p network simulation (and I know they have a network simulation lab ;-)).
But what is long-term p2p network simulation?
I usually give this example:

Utopian Transport Company Example
A good analogy for the analysis of Joost's peer-to-peer network is the logistics of a utopian road transport company where you can control the traffic lights and signals, the insertion rate of new traffic, the number of roads, the ratio of slow traffic (trailers) to fast traffic (Ferraris), the policy of the roads (for example, one road just for Ferraris or one just for trucks), and many other possibilities.
The number of variables influencing both the latency and the capacity of the system is too big, even though I only listed a few. Because of this huge number of variables you cannot take pen & paper and derive the best values for your particular situation; the solution is to approach the optimization problem with a simulation phase that gives you the best values for your particular situation.
An example that shows simple traffic situations was designed and written by Martin Treiber. In that example you can evaluate traffic situations and see how all the variables fit together: how the traffic insertion rate, the ratio of traffic types and the behaviour of every vehicle in conflict situations are critical to the performance of the traffic system.
Now click the link and play a little bit.
This simple example can help you understand the importance of modeling a system correctly.
Imagine a more complex example, with 40 roads, 80 crossroads, 50 traffic lights, 30 packet-origin nodes (therefore vehicle-origin nodes), 50 packet-destination nodes (therefore vehicle-destination nodes), different traffic insertion rates and different insertion patterns per origin node. Now you can figure out how important the modeling phase could be in a "real example".
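
To see why simulation beats pen & paper here, consider a toy single-road model (all numbers invented). The average queue length explodes as the insertion rate approaches the road's capacity, and the shape of that explosion is something you measure, not derive by eye:

import random

def simulate_road(insertion_prob, service_prob, steps=100000, seed=1):
    # Each tick a vehicle arrives with probability insertion_prob and the
    # front vehicle leaves with probability service_prob. The average
    # queue length is a stand-in for latency.
    rng = random.Random(seed)
    queue = total = 0
    for _ in range(steps):
        if rng.random() < insertion_prob:
            queue += 1
        if queue and rng.random() < service_prob:
            queue -= 1
        total += queue
    return total / steps

for rate in (0.50, 0.60, 0.69):
    print(f"insertion {rate:.2f}: average queue {simulate_road(rate, 0.7):.1f}")
# The non-linear blow-up near the capacity (0.7) is exactly the kind of
# behaviour a simulation phase reveals.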


Network Simulation at Joost
So what is Joost doing at the Network Simulation Lab?
If I didn't misunderstand this video [from 1 min 55 s to 2 min 45 s], they have focused just on simulating the system from a client perspective. Let's see:
###We have a network simulation lab, although it hasn't been used in quite a while, which is based on a bunch of daemons on FreeBSD that simulate jitter, latency and loss. And then we test the application against them. We have some confidence that the client works in typical DSL environments.
So typically we simulate kind of interleaving:
-By having a variable from 0 to 80ms of latency randomly added to simulate jitter.
-Base level of latency from 50 to 500 ms.
-And then we add 10% of packet loss.
-We make the network half-duplex, because most wireless networks are half-duplex.
-And then we see if the client works.###[Quote from: Colm MacCarthaigh]
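
For illustration, here is a toy Python equivalent of that test bench. The parameters mirror the quote (base latency, random jitter, 10% loss); everything else is my invention:

import random

def impaired_link(packets, base_ms=200, jitter_ms=80, loss=0.10, seed=7):
    # Toy model of the quoted test bench: a fixed base latency, up to
    # jitter_ms of random jitter, and 10% packet loss.
    rng = random.Random(seed)
    for pkt in packets:
        if rng.random() < loss:
            continue                      # packet lost
        yield pkt, base_ms + rng.uniform(0, jitter_ms)

delivered = list(impaired_link(range(1000)))
print(f"delivered {len(delivered)}/1000 packets")
print(f"average delay {sum(d for _, d in delivered) / len(delivered):.0f} ms")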

Before going to the next step I want to point out that building Joost must be extremely difficult, and I am pretty sure that Joost engineers have worked extremely hard to build an incredible system that has surprised the whole industry with its quality. But once that is said, I feel that until now Joost engineers have focused on good performance from the client's point of view, forgetting a global solution for the network and its long-term cost and scalability.
If we look at the quote from network system administrator Colm MacCarthaigh, it states that they have focused on testing how the client works behind different Internet access points, by changing the jitter, latency, packet loss, etc. in a client simulation environment. Going back to our transportation example, the bunch of Joost tests are like an evaluation of the path from a house to a highway (the highway access), forgetting the other components that influence the transport system. Thus, nobody is looking after the health of the whole transport system. For example, if there are no houses needing packets near our house, the truck may be late because the route is too long, so a policeman prioritizing the truck with the packet over other vehicles may be a solution; or maybe the packet we need is on the opposite side of the city. Imagine if people started to build their houses far away from the city center, forming a network topology that requires special policies rather than just a highway near each house. Think about what could happen if at some point all the houses of the city required packets. Each of the above situations can be a mess, or at least increase the complexity of the transport system, and these are just a few situations among tons of others.
The easiest solution is to build more highways and buy more Joost official transport trucks. You will need tons of money, and you will have to study scalability based on stats from the super-nodes, the required bandwidth per month per super-node and some good traffic estimator, just like a typical centralized Internet server; which is exactly what Joost is doing.
To sum up, their model is based on a client perspective rather than a network perspective.
But if Joost is working brilliantly right now with a client perspective, why is the network perspective important?
-Tolerable inefficiencies at the beginning might turn into huge problems in a big environment.
-The best policies at one point in time may not be the solution at each step of the way.
-Having only a client perspective will work well at the beginning, because as a matter of fact they are giving away more bandwidth than is profitable, but that is against Joost's principles (cost-effective) and against rationality if you have proposed a network with peers as the solution for Internet TV.

Does Joost want to behave as a classical streaming server all the time? I don't have any stats here, but I bet that right now Joost's super-nodes serve more than 50% of the data.
Can Joost scale as a classic streaming server and still profit, keeping in mind that in its shared-revenue model the money goes to the media companies?
Do not forget that Youtube's revenue stays almost 100% (and many times illegally) with Google, and a Google with tons of cash can support Youtube's bandwidth and eventually improve video quality. I am not saying the shared-revenue view is bad; I actually think sharing a lot (more than 50%) with the content creation industry is the only way, but that is not my field. Also, do not forget that traditional TV distribution is really cheap for media companies, so they won't move to a market where they give up 50% of the advertising revenue; but as I said, this is not my field.

Network perspective simulation:
A question comes up: how should Joost implement a simulation phase for the p2p network, and why is it the solution?
Actually, it is NOT "THE solution", but it would help to reach a better solution, one that adjusts better to the needs, which are not visible analytically; they are hidden behind how Joost works and how the users use it.
How Joost works:
1-Joost is basically a p2p platform with super-node support. Joost servers are the original seeders of content.
This means that Joost shares the responsibility of serving the TV with peer clients.
2-Joost servers also handle the “long-tail” (which is still pretty long).
This means that there is a long tail of TV programs that Joost has to seed actively.
3-Joost server "tops-up" the DSL "bandwidth" gap.
I am confused by this quote. Does it mean that they lend you bandwidth but then you have to give it back? I have noticed in some of Joost's conferences that they expect people to leave the program running all the time. In my opinion, expecting this from all users is just expecting too much, at least without a protocol policy that promotes uploading on the client side.

The job of a network-perspective simulator is to go behind how Joost works and find better ways to cover those needs. The simulation phase therefore helps to evaluate the different aspects in which Joost is involved. Here are some examples of variables (a sketch that uses one of them follows the list):
-Probability of users being online.
-Watching time and seeding time.
-When Joost is in watch mode and when it is in sleep mode (uploading).
-Channel change rate.
-Channel view ratio.
-Client upload bandwidth, and different policies based on it.
-Joost main servers' capacity.
-Latency and jitter per node.
-User types and behavior. Example: [eventual user -> 0.14 hours/week] [tiny user -> 1.4 hours/week] [active user -> 14 hours/week] [master user -> 24+ hours/week]
-etc.
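
As a taste of how one of those variables feeds a simulator, here is a sketch that samples a population from the user types above; the population shares are invented, and a real model would fit them to measurements:

import random

# (user type, weekly watch hours, assumed share of the user base)
USER_TYPES = [
    ("eventual", 0.14, 0.40),
    ("tiny",     1.4,  0.35),
    ("active",   14.0, 0.20),
    ("master",   24.0, 0.05),
]

def sample_weekly_demand(n_users, seed=42):
    rng = random.Random(seed)
    names, hours, shares = zip(*USER_TYPES)
    picks = rng.choices(range(len(names)), weights=shares, k=n_users)
    return sum(hours[i] for i in picks)

demand = sample_weekly_demand(1000000)
print(f"~{demand / 1e6:.2f} watch-hours per user per week")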


Besides the different variables, there are a lot of different situations and policies, which govern how data is sent peer-to-peer and peer-to-server. I show a few below (a toy policy comparison follows the list):
-What is better, to prioritize new downloads or old downloads?
-Study how locally cached data influences the amount of bandwidth needed from the main servers.
-What is the best policy for the client's Hard-Disk (CHD) to keep the network healthy?
-Evaluate different p2p network policies to prioritize latency or jitter. QoS strategies, minimum number of seeds sending data, etc.
-A little buffering might be a good solution in many situations. Evaluate its real cost and benefits (more data buffered on the user side -> more stability, and buffering improves video quality).
-How important is client seeding for the stability of the network? Can the system support peaks?
-Is it a good solution to tie the client's Hard-Disk quota to the client's seeding ratio?
-How do we value the seeding ratio? Estimate the amount of advertising needed to cover the cost of the leechers.
-What are the best policies for network peaks and for the stationary situation?
-Send data to some clients even when they do not ask for it, just to improve the network's health. Policies to decide how and when.
-Send the most-watched TV shows to the users with the best upload, to improve the health of the network.
-Cost of client decisions. Put a monetary value on users' decisions about uploading, zapping, etc., to reward what is best for the network.
-Best policies to decrease video stops caused by health problems in the network. What is the cost of decreasing these stops?
-Evaluate how big the long tail is.
-Availability of data in the network. Availability of the long-tail data. How data deficiencies in the p2p network affect the system. How client Hard-Disk (CHD) policies affect this point.
-What happens to the p2p network when a few TV shows are getting all the attention?
-Estimate the equivalent price, in terms of more/less advertising, of a user giving more/less Hard-Disk or more/less bandwidth than expected.
-etc.
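
As a flavour of what such a policy study looks like, here is a toy playback model (every number invented) comparing stall counts for different startup buffers; a network-perspective simulator would ask the same question with real traces and real peer behaviour:

import random

def count_stalls(startup_buffer_ms, n_chunks=10000, chunk_ms=100, seed=3):
    # Each 100 ms video chunk usually downloads in 70 ms, but 10% of the
    # time the p2p swarm hiccups and it takes 300 ms. The buffer absorbs
    # hiccups; when it runs dry, playback stalls.
    rng = random.Random(seed)
    cap = max(startup_buffer_ms, chunk_ms)
    buffer_ms, stalls = startup_buffer_ms, 0
    for _ in range(n_chunks):
        download_ms = 300 if rng.random() < 0.10 else 70
        buffer_ms = min(buffer_ms + chunk_ms - download_ms, cap)
        if buffer_ms < 0:
            stalls += 1
            buffer_ms = 0        # wait for the late chunk, then resume
    return stalls

for buf in (0, 500, 2000):
    print(f"{buf:>5} ms buffer -> {count_stalls(buf)} stalls")
# The shape of the result -- a small buffer removes most stalls -- is the
# kind of answer only a simulation phase can price correctly.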

How can Joost crash:
I have explained the basics of network simulation, highlighting that Joost is doing only a simple simulation. Although Joost is working well with a client simulation perspective, I have tried to explain why a network perspective is important for a long-term solution. Now it is time to hypothesize some situations where Joost could crash in the long term due to a bad network approach.

H1.-Quality:
Video quality is important, but even more important is the fluency of the video (latency and jitter). We will see that quality is strongly related to hypotheses H2 and H3.

H2.-Scalability. It cannot scale:
Joost is giving away more bandwidth than is profitable, which is against Joost's principles (cost-effective) and against rationality if you have proposed a network with peers as the solution for Internet TV. At the beginning you can support some inefficiencies, but with millions of users you have to look carefully at efficiency if cost is key to your strategy.
The best policies when you are a baby (your parents give 200% for your well-being) might not be the best solutions when you are an adult (at some point your parents cannot support you; you need too much money).
To help uncover these inefficiencies I have proposed a simulation with a network perspective, something Joost does not seem to be doing.

H3.-Revenue. Publicity money versus distribution cost:
It is easy to have quality at whatever cost, but it is difficult to have quality at a profitable cost. Being profitable depends both on cost and revenue, which mainly comes from advertising; and remember that media companies will get 50-80% of the advertising money. Thus, revenue depends mainly on cost, and since cost depends on efficiency, the simulation phase to increase efficiency is key to obtaining profits.

H4.-It cannot get interesting TV content:
There are many reasons why Joost might not get good TV content, but it is a fact that with more money for media companies the problem is easier to solve. More money for media companies equals less money for Joost, which means being cost-effective is key. Since efficiency improves when you evaluate more possibilities, the simulation phase will also be key on this issue. There is also an interesting thread on researching how to do better advertising based on the data Joost has, but that does not contradict the hypothesis (nearly a fact) that media companies will want more than 50% of the advertising money.

H5.-Being rejected by users due to inflexible policies:
As I mentioned before, it is highly important to be able to monetize different types of users. To accomplish this goal it is important to measure the influence of each user type on the health of the network. There are different types of users based on their upload/download ratio and the amount of Hard-Disk they share. For example, there are users willing to share bandwidth (upload tons of MB) but who want as little advertising as possible, or users who do not care much about more advertising but do not want to share at a 1/1 ratio. Depending on how healthy the network is, thanks to user upload ratios and Joost's main servers, you can be more flexible with the users. Here it is important to study what the monetary consequences would be if a user prefers not to upload. Thus, the simulation phase will also be important for monetizing user behavior and linking it to the health of the network.
As for users differing in Hard-Disk quota, there is an easy example in Joost's set-top box, which might not have Hard-Disk space but seems to be important in Joost's strategy. If all the people share zero Hard-Disk, Joost's long tail will be gigantic; therefore, Joost has to evaluate the cost of not sharing space with a full network simulation environment.

Joost future movements?
After explaining how simulation affects Joost's network, and seeing the possible situations where Joost could crash, it is time to move on and take a look at the future. I am assuming Joost will realize the importance of a whole-network simulation environment and will start to build one. I am also assuming that the simulation environment will point to the client upload rate as a really important element. With these assumptions I am going to set aside the different policies that could improve Joost's efficiency (AKA forgetting the engineer) and give some brief lines about Joost's next moves (AKA taking the role of president of the company). I realize it is pretentious not to restrict myself to network simulation, but I think that by giving some guidelines about the future, many people will understand the whole board better.
So here we go:

1.-Stop continuously saying that Joost is TV and therefore requires a lot of upstream all the time. The fact that Joost needs user upstream does not justify not letting the user choose. Give the user the right to stop or limit Joost uploads without closing the application. Otherwise, Joost will stay closed all the time and users will hate it. As I mentioned before, I think this point is related to network simulation, because you need to understand how to scale the network with different types of users.


2.-Promote community. This is something that is starting right now, but there is a lot of work to do outside the application, not just inside Joost. For example, being able to view what similar users watch, being able to make a channel on the web, etc.

3.-Use microformats to increase the relationship between the web and Joost.
It would be interesting to promote the use of Joost by having web links to Joost content. For example, if you are blogging about some funny gag that happened last night on your favorite show, the idea is to have a special web tag for Joost with info about the show, a "from:time to:time" and other interesting data. Then, if you click it (it seems your web browser will need some microformat-oriented add-on), the button will redirect you to Joost and play the funny situation the blogger is talking about[1]. Moreover, if the video is on Youtube, promote within the blogosphere the use of a special script that lets the user choose between the Youtube view and the Joost view. The key here is that capturing the funny moment would be easier on Joost than uploading the video illegally to Youtube; as simple as a "create link" button for the show you want. Imagine this situation with soccer videos, which are flooding Youtube and Dailymotion illegally; there are many bloggers uploading tons of soccer games and tons of people watching them without anybody except Youtube getting revenue. If the content were available at Joost, it would be easier for a soccer fan to create a Joost link and say, "look at Cristiano Ronaldo's last goal!"; and it would be legal. I just pointed out soccer examples, but I guess there are many other examples with other sports or topics, such as the USA election debates, Bush's lies about Iraq, general news, etc.
There is a lot of viable integration between Joost and the web by going hand in hand with the media content creators; for example, a Joost web video (Youtube style) for Joost content as a way of promoting access to Joost programs. Another example is to pay for Youtube's legal videos (the ones under fair use or with content rights) to create a button that allows you to continue viewing the program at Joost.
[1] Lately I have seen that Joost has worked a little on this issue, with links to TV shows as in here [joost://08200k9].

4.-Joost for different devices. It seems that Joost is working on a set-top box, and some people think it will break the market. I would be more cautious; I would rather say there is a big space there, beyond just building your own set-top box. Find a way to be PS3 and XBOX360 compatible. I know it is difficult to have one client for each platform, but there are other solutions, such as streaming Joost from the computer to other systems (Apple TV, PS3, XBOX360). No Linux version? I understand it may not be profitable, but it should be easy, and it is a plea from many Linux users, who would try to code an open Joost if the official version stays Windows- and Mac-only.

5.-Make the most difficult decision: implement classic download capability from different sources rather than only Joost content. The model to follow here is Miro, which plays any video file[2]; it is an open Internet TV application and has BitTorrent power. With an open Internet TV application, meaning one that downloads videos directly from RSS channels, the feeling of openness will help Joost a lot. It is simple: with an open platform there is no need to install any other video competitor, so Joost would be the only installed program for Internet TV, which in return will make users happy as they will be able to watch whatever they want. On the other hand, if content is available at Joost, people will choose the Joost version: if you have the option to watch it right now or wait 20-40 minutes for a download to complete, what would you choose? I also think a BitTorrent client mainly for TV shows is interesting, because if content is not at Joost people will use BitTorrent anyway, and what is better than downloading that video with Joost, keeping users online longer and sharing upload between BitTorrent and Joost? Also, it is easy to pause all BitTorrent transfers while Joost is being watched.
[2] As far as I know, right now there are some patent conflicts with the codecs, so I do not know if it is really possible.

6.-Work on video quality. What is the video compression behind the scenes? I know it is H.264, but there are configurations of H.264 with better quality at the average rate of 320 MB of download per hour. Since real time is not a problem for Joost, they can use H.264 with the best configurations for quality instead of for real time. I guess the problem here is that, as everybody knows, the best encodings are variable bit rate (VBR) ones, which have peaks; and if the protocol does not allow client buffering, the codec choice is limited by the peak video bitrate rather than by the average video bitrate. Translating: with VBR and zero buffering, if your connection speed limit is 100 KB/s you will have to choose a codec with a peak rate of 100 KB/s, which could mean an average rate of 40-50 KB/s, giving up 50-60% of the bandwidth, which is too much in H.264. With some buffering you can smooth this situation, but as Colm MacCarthaigh said, Joost does not buffer, in order to save bandwidth, because people change channels a lot. Yeah, just a classical streaming-server view. Mmmm, did I say that simulating the health of the network with different policies (i.e., different buffering strategies) would be key to Joost's strategy? As I said, the idea of simulating is to have more data to make better decisions.

Update: I have read recently that Joost had been using CBR instead of VBR. They were using a CBR of, for example, 108.8 KB/sec (I actually realize it was less, but this is an example) and the quality was poor, so they changed to VBR. As I pointed out before, due to the no-buffering policy the VBR is limited by the peak rate rather than by the average rate, so a global study of how the buffer affects p2p health is important in order to make a decision based on network health and the needs of the users (quality). There are different buffering strategies, and I am pretty sure the answer is not black (no buffer) or white (a one-minute buffer); there is life in the gray (from 500 ms to two seconds).
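
A back-of-the-envelope sketch of that arithmetic: for a given VBR trace and link rate, the minimum stall-free startup buffer is the worst-case gap between the bytes the player has consumed and the bytes the link can have delivered. The trace below is invented:

def min_buffer_bytes(chunk_sizes, chunk_duration_s, link_Bps):
    # Worst-case gap between consumption and delivery while playing a
    # VBR stream over a fixed-rate link.
    gap = worst = 0.0
    for size in chunk_sizes:
        gap += size - link_Bps * chunk_duration_s   # consumed - delivered
        worst = max(worst, gap)
    return worst

# Toy VBR trace with 1-second chunks: average ~44 KB/s, peaks of 100 KB/s
trace = [45000] * 20 + [100000] * 5 + [30000] * 20
print(min_buffer_bytes(trace, 1.0, 100000))  # 0.0: the link covers the peak, no buffer needed
print(min_buffer_bytes(trace, 1.0, 50000))   # 150000.0: ~3 s of buffer lets a half-rate link carry it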

Conclusion
Joost is one of the companies trying something different, changing things from the way they are. They have done brilliant work building the Joost system, but after studying what Joost is doing I have seen a gap in their p2p network simulation.
With this article I have tried to point out that pursuing efficiency in a system like Joost without a simulation of the whole network is like driving blindfolded by a hood and not taking it off just because at the beginning the road was straight and there were no cars. Now, if you did not take the hood off at the beginning, it is possible that by the time you realize that seeing is actually important you are no longer able to take the hood off; all your energy is focused on the act of not crashing. An even worse possibility is to realize after crashing that seeing IS important. In any case, being able to see does not transform you into F1 pilot Ayrton Senna; you still have to work a lot, have natural talent and, of course, not forget the economic support of an F1 team behind you.


#Disclosure#
###I do not work at Joost or at any competing company. Moreover, I do not sell any network simulation environment ;-).
In my opinion Joost is building something interesting but extremely difficult. Thus, they will face huge architectural problems, which I find extremely gripping.
I realize not everything is about having a great engineering solution. The key is to merge engineering solutions with market rules and user requirements.
My background is just an MS in telecommunications; for my thesis I built, from scratch, a system to simulate Network on Chip (NoC). What is NoC? Just another dimension.###

LINKS
-Joostteam
-Joost Network presentation.
-The famous Joost Network presentation PDF.
-Joost blog.

NoC simulator

Many of you know that my thesis was about Network on Chip (NoC). What you may not know is that the thesis focused on a long-term NoC simulation solution instead of the typical small, inflexible simulator written for a single study. As co-project manager I decided that modularity and flexibility were important goals. To accomplish them, the design was based on object-oriented C++ code, strong use of the STL, and scripting, all in a Linux environment. NoC is maybe one of the biggest changes in the next generation of integrated circuits. It is as difficult as it is interesting, because it brings the networking field into on-chip communications, where things are much more complicated.
Lately I have been writing some posts to summarize my thesis work, so you can take a look at the articles on my NoC blog.
Do not hesitate to contact me if you need my NoC simulator code or more info about it.

The End of Poverty

This video by Hans Rosling is brilliant; I discovered it via the fantastic blog "Un Gaditano en Silicon Valley".

I recommend everyone watch it. It sums up a great book, "The End of Poverty" ("El fin de la pobreza"), in 20 short minutes:

Sunday

A View of the European Summit

"Cuando en 1986 España ingresó en la Unión Europea (entonces Comunidad Económica Europea), recuerdo que proliferó una pegatina con el siguiente lema: Mi país, Europa. Lo que ha pasado este fin de semana en Bruselas es el fracaso de esa idea: Europa como país. No habrá Constitución, ni símbolos, ni puesto en la ONU. Se impone el cortoplacismo y el provincianismo nacionalista de Polonia (...)" "La catoliquísima Polonia ha contribuido con ahínco a minar el sueño de millones de ciudadanos. La musulmana Turquía nunca hubiera hecho algo así."

I won't add a single comma to Toño Fraguas's magnificent view of the summit.

Don't miss the full text.