“Silence is dry; sound is wet. Volume is the mass of sound. In silence you can hear people think, but only when their bodies stop making noises. But who cares what people think? The noises their bodies make are more interesting anyway. Listen to your body. Talk to plants. Ignore people.”—Unknown (via q-uote)
Some time ago I stumbled upon a post on the n-Category Café (which is, by the way, a blog definitely worth reading); here are part 1 and part 2.
The post introduces the concepts of entropy and diversity (and cardinality) in the field of biology. While I’m acquainted with the definition of entropy in information theory (and, slightly less, in physics), I found it really beautiful that these concepts can be applied so elegantly in population biology.
The first part applies the definition of entropy to a finite probability space representing an ecosystem with several species. Each probability expresses how frequent a particular species is, namely the probability of encountering an individual of that species. The entropy represents the diversity of the ecosystem or, in very rough language, whether the ecosystem is in “a good state”.
The second part extends this to a finite probability metric space. Here the probabilities have the same meaning as before, while the metric distance between points in the space captures how similar one species is to another: a low distance between two species means they are quite similar.
From the diversity you can derive the entropy and the cardinality of a probability (metric) space.
Intrigued by the post, I wrote a small Ruby gem to calculate the entropy, diversity and cardinality of a probability space with or without a metric. The gem, entropy_gem, is on rubygems.org, and the source code is on GitHub.
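For the non-metric case the basic quantities are easy to compute. Here is a minimal Ruby sketch (not the gem’s actual API — the metric version, which weights species by their similarities, is omitted): the diversity is the exponential of the Shannon entropy, interpretable as the “effective number of species”.

```ruby
# Shannon entropy of a finite probability distribution (natural log).
# Zero probabilities are skipped, following the convention 0 * log(0) = 0.
def entropy(probs)
  -probs.reject(&:zero?).sum { |p| p * Math.log(p) }
end

# Diversity = exp(entropy): an ecosystem with diversity D is exactly as
# diverse as an ecosystem of D equally frequent species.
def diversity(probs)
  Math.exp(entropy(probs))
end

probs = [0.5, 0.25, 0.25]
entropy(probs)   # ≈ 1.0397
diversity(probs) # ≈ 2.8284 — a bit less diverse than 3 equal species
```

A quick sanity check: a uniform distribution over n species gives entropy log(n) and diversity exactly n, the maximum possible.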
I wonder whether, moving from information theory to population biology and back to computer science, these concepts could be applied in the field of machine learning, perhaps in genetic programming or any algorithm inspired by the mechanisms of evolutionary biology.
Let me know if you have any suggestions, comments or corrections.
I recently bought a Guide10 Adventure Kit solar panel from Goal0. The kit includes a Nomad7 solar panel and a battery pack with 4 AA NiMH batteries.
Since I needed it in time for the Chaos Communication Camp and couldn’t find a vendor in Europe, this was a good chance to test Borderlinx, a forwarding service that provides you with a US address if you live outside the US. So here are my opinions about both the solar panel and the forwarding service.
Goal0 Nomad7 Solar Charger
Pros: The solar panel can directly charge your smartphone or the battery pack. It has a 5V USB output and a 12V output with a standard car charger socket. The battery pack includes 4 AA NiMH batteries, but you can use any NiMH battery, and an adapter to charge the AAA type. The battery pack can then be used to charge the smartphone by switching it to output mode.
Cons: I wish it included some adapters, especially for older phones that require a dedicated charger. You can use a car charger, though, if you have one, but that forces you to use the direct 12V output of the solar panel instead of the 5V battery pack output.
Bottom line: So far the product seems very good. I used the battery pack to charge my Nexus S, and the solar panel to charge the battery pack, multiple times. The solar panel is water resistant and light; I can recommend it for backpacking/camping.
The Borderlinx service is operated by DHL. It provides a US address and, as a bonus, a UK address. You can use this address as the shipping address when you buy from an online shop; Borderlinx then forwards your package to your own address.
Pros: Creating an account is quite easy and doesn’t require a credit card. Borderlinx has a retention period of 30 days, during which you can accumulate your goods and then send them in a single shipment, saving a little on costs. The service calculates all the shipping costs, duties and taxes for you, so you know almost exactly what you’re going to pay and you don’t have to deal with customs forms and taxes. It offers standard and express delivery. Plus, most online shops offer free shipping within the US.
Cons: the service isn’t exactly cheap.
Bottom line: the shipping was fast and the overall experience was good, but not cheap. It can be useful for buying things you can’t find in your country (but beware that some US vendors don’t ship to addresses provided by services like Borderlinx); for more casual shopping you should carefully evaluate the cost and whether it’s worth it.
A nice introduction to collective intelligence, and why it sometimes fails.
The good news is that the wisdom of crowds exists. When groups of people are asked a difficult question—say, to estimate the number of marbles in a jar, or the murder rate of New York City—their mistakes tend to cancel each other out. As a result, the average answer is often surprisingly accurate.
But here’s the bad news: The wisdom of crowds turns out to be an incredibly fragile phenomenon. It doesn’t take much for the smart group to become a dumb herd. Worse, a new study by Swiss scientists suggests that the interconnectedness of modern life might be making it even harder to benefit from our collective intelligence.
This week marks 50 years since Yuri Gagarin climbed into his space ship and was launched into space. It took him just 108 minutes to orbit Earth and he returned as the World’s very first space man.
To mark this historic flight we have teamed up with the astronauts onboard the International Space Station to film a new view of what Yuri would have seen as he travelled around the planet.
Weaving these new views together with historic voice recordings from Yuri’s flight and an original score by composer Philip Sheppard, we have created a spellbinding film to share with people around the World on this historic anniversary.
Watch First Orbit right now on YouTube. And if you enjoy it then do check out our next film. We need your help to make it.
I’m a newbie runner. Last winter I started the C25K program and now I can steadily run 5 km (I won’t disclose the time, though☺), so I searched for an Android app to record the time spent, the route taken and a few other stats.
After a brief search I narrowed the candidates down to three applications, namely Cardiotrainer, Endomondo and Runkeeper Pro, and tried them all. The first two are available in the Android Market both as a free version and as a paid enhanced version (I bought both and am evaluating the paid versions here); the last one is free.
Here are a few pros and cons I found for each app:
Cardiotrainer
Pros:
- the app probably has more features than the other two;
- stable, easy to use;
- good autopause if you stop during your workout;
- integrated with other applications to track your weight, if you want to lose some, and to log the calorie intake of your meals.
Cons:
- I don’t like the website where you upload your workouts; it has few statistics and features;
- it’s the most expensive, but with the paid version you get all the other integrated apps for free (race module, weight progress, etc.);
- you can’t set a goal for a workout, except in race mode, but once the race completes it stops recording the workout.
Endomondo
Pros:
- the website has a good number of stats and features, and a sort of “social” twist: you can share your workouts with friends, comment on others’ workouts, and so on;
- stable, easy to use;
- you can set a goal for a workout (time or distance);
- you can control the app with the headset (I haven’t tried that yet);
- developed by a European crew, it’s the only one that distinguishes between cycling as a sport and cycling as transportation☺.
Cons:
- you can’t vary the notification interval during the workout; it’s fixed at every km;
- the autopause sometimes fires even while you’re running; I suspect it has something to do with GPS tracking.
Runkeeper Pro
Pros:
- it has the most useful set of features for runners: notifications for time and distance (both can be active at the same time), with a configurable interval;
- you can edit and modify a workout; for example you can set a goal of a 30 min run with 5 min slow, 5 min fast, etc., and choose the number of repetitions;
- good website; I especially like the possibility of finding local users of the app in your area. It has a fair set of stats, and you can download predefined training programs to the app, albeit not for free.
Cons:
- the app is free, but you can download training programs or access all your stats only with a monthly fee; I’d rather pay for the app, even a not-so-small amount as with Cardiotrainer, than pay a monthly fee.
And the winner is…
Cardiotrainer was the first app I installed, since at first glance it seemed the most used. Now I appreciate the stats you can collect on your workouts on the Endomondo and Runkeeper websites. You can easily switch from one app to another, since all three support importing/exporting workouts in GPX format.
I suspect Runkeeper is the most useful if you’re a serious runner, but Endomondo comes pretty close. For now I’ve chosen Endomondo.
Once in a while a bacterial strain resistant to the newest antibiotic drugs surfaces, both in developed countries, where medical standards are supposed to be higher, and in developing countries, where disease spreads more easily.
The latest such occurrence is reported in the April issue of Scientific American and summarized here.
Every time I read about the discovery of so-called “super” bacteria resistant to a “super” new drug just developed by a pharmaceutical company, I recall a story I read about a year ago: Norway found a simple solution by cutting the use of antibiotics. Bacteria do not get the chance to develop resistance, because Norwegian doctors prescribe fewer antibiotics than doctors in any other country.
(I want to thank Bobby Kleinberg for bringing this to my attention.)
Consider the following voting scheme:
1. Choose a random person A1.
2. A1 chooses a random set of 30 people. Call the set A2.
3. Choose a random set of 9 from the 30 in A2. Call this set A3.
4. The members of A3 pick a set of 40 people. This is NOT random: every person they choose must be approved by at least 7 of the 9. Call this set of 40 A4.
5. Choose a random set of 12 from the 40 in A4. Call this set A5.
6. The members of A5 pick a set of 25 people. This is NOT random: every person chosen must be approved by at least 9 of the 12.
7. Choose a random set of 9 from those 25 people. Call this set A6.
8. The members of A6 pick a set of 45 people. This is NOT random: every person chosen must be approved by at least 7 of the 9.
9. Choose a random set of 11 from the 45 people.
10. These 11 choose a final set of 41. They do this by every member choosing a candidate, whom they may examine in person. The candidates with the most approvals are picked.
11. These 41 choose the WINNER, but the winner must get at least 25 votes. (It is not clear if any of them could be the winner.)
Which of the following is true?
1. This is a real scheme that was really used.
2. This scheme was part of a BREAKTHROUGH!!!! result.
3. This scheme is a counterexample to a conjecture about voting schemes.
4. This scheme (with these parameters) is an example of a voting scheme that is NP-hard to manipulate.
I would have guessed that it was a contrived scheme serving as a counterexample, but no: this scheme was really used to pick the new doges of Venice from 1268 until roughly 1768. Why so complex? To avoid anyone rigging the election. You can read more about it here. I suspect it would be hard to manipulate, though I don’t think it is known to be NP-hard to manipulate.
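Out of curiosity, the structure of the protocol is easy to simulate. In this minimal Ruby sketch the approval rounds are modelled as fresh random draws from the whole population — a crude stand-in for the real 7-of-9 (etc.) deliberations — so only the alternation of lotteries and elections is faithful:

```ruby
# Toy simulation of the Venetian doge-election protocol's structure.
# Approval rounds are modelled as random draws (NOT a faithful model
# of the >= 7/9 deliberations; only the stage structure matches).
def elect_doge(population)
  a2  = population.sample(30)  # A1 chooses 30 people (A2)
  a3  = a2.sample(9)           # reduce by lot to 9 (A3)
  a4  = population.sample(40)  # A3 elects 40, each approved by >= 7 of 9 (A4)
  a5  = a4.sample(12)          # reduce by lot to 12 (A5)
  s25 = population.sample(25)  # A5 elects 25, each approved by >= 9 of 12
  a6  = s25.sample(9)          # reduce by lot to 9 (A6)
  s45 = population.sample(45)  # A6 elects 45, each approved by >= 7 of 9
  s11 = s45.sample(11)         # reduce by lot to 11
  s41 = population.sample(41)  # the 11 elect 41
  s41.sample                   # the 41 elect the doge (>= 25 votes needed)
end

nobles = (1..2000).to_a        # hypothetical electorate
doge   = elect_doge(nobles)
```

The repeated lotteries are what make the scheme hard to rig: a would-be manipulator cannot know in advance which small committee to bribe.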
Why did this come up? Bobby Kleinberg gave a talk at UMCP where he brought it up to show that his results (about how randomness can help make mechanisms hard to manipulate) had a real-world counterpart. See here for his paper, co-authored with Jason Hartline and Azarakhsh Malekian.
In the American system of Presidential elections, GFW’s question could equally be asked: why bother spending $700 million to win an election when it would be cheaper to buy electoral votes at a million dollars each?
On March 15th, an HTTPS/TLS Certificate Authority (CA) was tricked into issuing fraudulent certificates that posed a dire risk to Internet security. Based on currently available information, the incident got close to — but was not quite — an Internet-wide security meltdown. As this post will explain, these events show why we urgently need to start reinforcing the system that is currently used to authenticate and identify secure websites and email systems.
There is a post up on the Tor Project’s blog by Jacob Appelbaum, analysing the revocation of a number of HTTPS certificates last week. Patches to the major web browsers blacklisted a number of TLS certificates that were issued after hackers broke into a Certificate Authority. Appelbaum and others were able to cross-reference the blacklisted certificates’ serial numbers against a comprehensive collection of Certificate Revocation Lists (the CRL URLs were obtained by querying EFF’s SSL Observatory databases) to learn which CA had been affected.
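Mechanically, the browser patches amount to a hard-coded serial-number lookup. Here is a self-contained Ruby sketch of that idea, using a throwaway self-signed certificate and a placeholder serial (not one of the real fraudulent serials):

```ruby
require 'openssl'

# The browser patches hard-coded the bad serial numbers; checking a
# certificate then boils down to a serial lookup. The serial below is
# a placeholder, NOT one of the real fraudulent serials.
BLACKLISTED_SERIALS = [0xdeadbeef].freeze

def blacklisted?(cert)
  BLACKLISTED_SERIALS.include?(cert.serial.to_i)
end

# Build a throwaway self-signed certificate just to exercise the check.
key  = OpenSSL::PKey::RSA.generate(2048)
cert = OpenSSL::X509::Certificate.new
cert.version    = 2                          # X509v3
cert.serial     = 0xdeadbeef
cert.subject    = OpenSSL::X509::Name.parse('/CN=example.test')
cert.issuer     = cert.subject               # self-signed
cert.public_key = key.public_key
cert.not_before = Time.now
cert.not_after  = Time.now + 3600
cert.sign(key, OpenSSL::Digest.new('SHA256'))

blacklisted?(cert)  # => true
```

Note that this per-serial approach only protects users who receive the browser update, which is exactly the limitation discussed below.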
The answer was the UserTrust “UTN-USERFirst-Hardware” certificate owned by Comodo, one of the largest CAs on the web. Comodo has now published a statement about the improperly issued certs, which were for extremely high-value domains including google.com, login.yahoo.com and addons.mozilla.org (this last domain could be used to trojan any system that was installing a new Firefox extension, though updates to previously installed extensions have a second layer of protection from XPI signatures). One cert was for “global trustee” — not a domain name. That was probably a malicious CA certificate that could be used to flawlessly impersonate any domain on the Web.
Comodo also said that the attack came primarily from Iranian IP addresses, and that one of the fraudulent login.yahoo.com certs was briefly deployed on a webserver in Iran.1
What should we do about these attacks?
Discussing problems with the revocation mechanisms that should (but don’t) protect users who don’t instantly get browser updates, Appelbaum makes the following assertion:
If the CA cannot provide even a basic level of revocation, it’s clearly irresponsible to ship that CA root in a browser. Browsers should give insecure CA keys an Internet Death Sentence rather than expose the users of the browsers to known problems.
Before discussing whether or not such a dramatic conclusion is at all warranted, it is worth considering what the consequences of blacklisting Comodo’s UserTrust CA certificate would have been. We used the SSL Observatory datasets to determine what had been signed by that CA certificate. The answer was that, as of August 2010, 85,440 public HTTPS certificates were signed directly by UTN-USERFirst-Hardware. Indirectly, the certificate had delegated authority to a further 50 Certificate Authorities, collectively responsible for another 120,000 domains. In the event of a revocation, at least 85,000 websites would have to scramble to obtain new SSL certificates.
The situation of the 120,000 other domains is more complicated: some of these are cross-certified by other root CAs or might be able to obtain such cross-certifications. In most — but not all — cases, these domains could continue to function without updating their webserver configurations or obtaining new certs.
The short answer, however, is that Comodo’s UTN-USERFirst-Hardware certificate is too big to fail. If the private key for such a CA were compromised, by the Iranians or by anybody else, browsers would face a horrible choice: either blacklist the CA quickly, causing outages at tens or hundreds of thousands of secure websites and email servers, or leave all of the world’s HTTPS, POP and IMAP deployments vulnerable to the hackers for an extended period of time.
Fortunately, Comodo has said that the master CA private keys in its Hardware Security Modules (HSMs) were not compromised, so we did not experience that kind of Internet-wide catastrophic security failure last week. But it’s time for us to start thinking about what can be done to mitigate that risk.
Cross-checking the work of CAs
Most Certificate Authorities do good work. Some make mistakes occasionally,2 but that is normal in computer security. The real problem is a structural one: there are 1,500 CA certificates controlled by around 650 organizations,3 and every time you connect to an HTTPS webserver, or exchange email (POP/IMAP/SMTP) encrypted by TLS, you implicitly trust all of those certificate authorities!
What we need is a robust way to cross-check the good work that CAs currently do, to provide defense in depth and ensure (1) that a private key-compromise failure at a major CA does not lead to an Internet-wide cryptography meltdown and (2) that our software does not need to trust all of the CAs, for everything, all of the time.
For the time being, we will make just one remark about this. Many people have been touting DNSSEC PKI as a solution to the problem. While DNSSEC could be an improvement, we do not believe it is the right solution to the TLS security problem. One reason is that the DNS hierarchy is not trustworthy. Countries like the UAE and Tunisia control certificate authorities, and have a history of compromising their citizens’ computer security. But these countries also control top-level DNS domains, and could control the DNSSEC entries for those ccTLDs. And the emergence of DNS manipulation by the US government also raises many concerns about whether DNSSEC will be reliable in the future.
We don’t think this is an unsolvable problem. There are ways to reinforce our existing cryptographic infrastructure. And building and deploying them may not be that hard. Look for a blog post from us shortly about how we should go about doing that.
1. This is strong circumstantial evidence that the attack was perpetrated by Iranians, though it is also possible that the perpetrators used compromised systems in Iran in order to frame Iran.
3. These numbers are from the SSL Observatory. Before we performed those scans, we don’t believe anybody knew how many CAs were trusted by our browsers and operating systems, because CAs regularly delegate authority to subordinate CAs without announcing it publicly.
Network neutrality is the idea that your cellular, cable, or phone internet connection should treat all websites and services the same. Big companies like AT&T, Verizon, and Comcast want to treat them differently so they can charge you more depending on what you use.
The Federal Communications Commission (FCC) is currently debating rules to define limits for internet service providers (ISPs). The hope is that these will keep the internet open and prevent companies from discriminating against different kinds of websites and services.
The 0.9.0 release of HTTPS Everywhere is a new beta version designed to offer improved protection against Firesheep. Most notably, it can provide much better protection for Facebook, Twitter and Hotmail accounts, as well as completely new protection for bit.ly, Dropbox, Amazon AWS, Evernote, Cisco and Github.
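For context, HTTPS Everywhere works from XML rulesets that rewrite http:// URLs to their https:// equivalents. A ruleset looks roughly like this (a simplified sketch for a hypothetical site, not one of the actual 0.9.0 rules):

```xml
<ruleset name="Example">
  <target host="example.com" />
  <target host="www.example.com" />
  <rule from="^http://(www\.)?example\.com/" to="https://example.com/" />
</ruleset>
```

The `target` elements declare which hosts the ruleset applies to, and each `rule` is a regular-expression rewrite applied to matching URLs.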