Wednesday, December 14, 2011

Using Google Public Data


The official Google Public Data Help Center, where you can find tips and tutorials on topics such as how the Public Data Explorer relates to Trendalyzer and Gapminder.


The Guardian Datablog: The best way to get value from data is to give it away


The best way to get value from data is to give it away

Tuesday 13 December 2011 18.20 GMT

Yesterday, European Commission Vice President Neelie Kroes unveiled a new package of policies related to open data and public sector information.
Neelie Kroes making the EC open data announcement. Photograph: okfn on Flickr / Guardian
Last Friday I wrote a short piece for the Datablog giving some background and context for the big open data policy package that was announced yesterday morning by Vice President Neelie Kroes. But what does the package contain? And what might the new measures mean for the future of open data in Europe?
The announcement contained some very strong language in support of open data: open data, it declared, is the new gold, the fertile soil out of which a new generation of applications and services will grow. In a networked age we all depend on data, and opening it up is the best way to realise its value and maximise its potential.
There was little ambiguity about the Commissioner's support for an 'open by default' position for public sector information, nor for her support for the open data movement, for "those of us who believe that the best way to get value from data is to give it away". There were props to web inventor Tim Berners-Lee, the Open Knowledge Foundation, OpenSpending, WheelMap, and the Guardian Datablog, amongst others.
But will Brussels walk the walk? What is actually in the package? Two very concrete, more or less straightforward things topped the bill: data and cash. Firstly, the European Commission will lead the way by pioneering the open data policies and practices it would like to see adopted by EU member states - "eating your own dogfood", as software developers affectionately call it. It will open up documents and datasets from across dozens of institutions - no mean feat, as I'm sure UK Government representatives will have told Neelie when she and her team visited Number 10 earlier this year.
Secondly, the Commission will put up €100 million in financial support for research into "data-handling technologies". This will no doubt stimulate cross-border collaboration around tools and technologies that enable more people, projects, organisations and companies to derive value from data. Hopefully some of this will support the wonderful work that is already going on to clean up, harmonise and expose data to the public - as well as funding the creation of more easy-to-use open source tools and applications that consume it, helping us do more useful things and answer more sophisticated questions.

Tuesday, December 13, 2011

Google Public Data Explorer 2.0: Making Public Data More Accessible on the Web


Making Public Data More Accessible on the Web

12/12/11 | 11:00:00 AM
Last year, we launched the Google Public Data Explorer, an online tool that organizes public statistics and brings them to life with interactive exploration and visualizations. Since then, we’ve added dozens of new datasets and received enthusiastic feedback from users around the world. Several data providers, such as the UN Development Programme and Statistics Catalunya, have even integrated the tool into their web sites.

Today, we’re pleased to announce the next step in our public data effort - a completely revamped product featuring an updated look and feel, improved interaction modes, and a new visualization engine.

Now you can:

1. Search across the data
Our most popular datasets have been accessible through Google Web Search for some time, and this will continue to be the case. Now, however, you can also search within the product, across our extensive corpus of public statistics. This allows you to find data on issues such as global competitiveness, population density, or infant deaths. The search page also features a set of sample visualizations and stories, which highlight some of the topics covered by the product.

2. Slice and dice with fewer clicks
Once you’ve selected a dataset, the new exploration UI puts the data front and center. Want to plot “Fertility Rate” instead of “GDP”? Just make a single click in the list to the left of the chart. Interested in the unemployment rate for women as opposed to men? Just as easy. No more digging through pop-ups or settings menus.

3. Access it on any device
Our new charts are built according to open web standards such as HTML5. As a result, they work across all common desktop, tablet, and smartphone configurations, without depending on third-party plugins. We expect the performance and functionality of the charts to improve over time as browser support for HTML5 matures.

Give the new Google Public Data a try, and let us know what you think by posting in our discussion forum.

Monday, December 12, 2011

EUROPA - Press Releases - Digital Agenda: Turning government data into gold

Brussels, 12 December 2011 – The Commission has launched an Open Data Strategy for Europe, which is expected to deliver a €40 billion boost to the EU's economy each year. Europe’s public administrations are sitting on a goldmine of unrealised economic potential: the large volumes of information collected by numerous public authorities and services. Member States such as the United Kingdom and France are already demonstrating this value. The strategy to lift performance EU-wide is three-fold: firstly the Commission will lead by example, opening its vaults of information to the public for free through a new data portal. Secondly, a level playing field for open data across the EU will be established. Finally, these new measures are backed by the €100 million which will be granted in 2011-2013 to fund research into improved data-handling technologies.

Friday, December 9, 2011

The Guardian Datablog: Opening Europe's Data


Opening Europe's Data

The European Commission is set to make a major announcement about the future of the Public Sector Information Directive on Monday. Jonathan Gray from the Open Knowledge Foundation discusses what this might mean for open data in Europe

Opening Europe's data. Photograph: guardian.co.uk
Approximately one in fourteen people on the planet now live in one of Europe's 27 Member States. There are thousands of local, regional and central government bodies in Europe, which collectively disburse billions and billions of euros on behalf of European citizens every year (over €6,182bn in 2010).
Many of these public bodies collect or generate information relevant to their operations - from the timetables of trains or rubbish collection services, to metrics on schools, universities and hospitals, to databases on carbon emissions, weather patterns, or biodiversity. "How much data?" asks a distant, data-hungry cry. While I'm not sure that this is something that any statistics department has got around to measuring nor something that has yet been subjected to brazen guesstimation, I think it's fairly safe to say: lots and lots.
Much of this data is, of course, private information about citizens, and hence should be handled like plutonium pellets ("Kept in secure containers, handled as seldom as possible and escorted whenever it has to travel"). And some of it shouldn't see the light of day on national security grounds (at least for a while). But much of it is or should be public: free for all to access, use and benefit from. The information that public bodies collect and use for themselves is often relevant to us.

Tuesday, November 29, 2011

ReadWriteWeb: Google+ First among Top 10 Social Web Products of 2011

From: http://www.readwriteweb.com/archives/top_10_social_web_products_of_2011.php

1. Google+

Up till 2011, Google wasn't known for its social networking prowess. Unless you count Orkut, a social network product that became a phenomenon...in Brazil only. At the end of June 2011 that all changed, with the worldwide launch of (in our opinion) the best social network product of the year: Google+.
It was a muddled launch. Back in March, ReadWriteWeb's Marshall Kirkpatrick got the scoop about a new Google product based on a circles concept. At the time, Google vigorously denied the existence of such a product. But lo and behold, Google+ launched over three months later - and its core feature was indeed "circles." With circles you could better segment your friends, something that was a major pain point in Facebook.
Initially Google+ launched to a chorus of media outlets shouting "Facebook killer." However, it soon became apparent that Google+ was going to be most useful to Google as the social component of its entire online product suite: including Google search, Google Reader and YouTube. Although it's a more than useful standalone social network, too. Particularly for topic-focused discussions. Google+ has grown rapidly, attaining over 40 million users in a matter of months - although just how many are active is a contentious point.

Wednesday, November 23, 2011

Could stats benefit from JSON as an exchange format?

From: http://json-stat.org/



JSON-stat =

{
  "website": "json-stat.org",
  "motto": "Could stats benefit from JSON as an exchange format?",
  "announcements": "@jsonstat",
  "gather": "+JSON-stat",
  "hashtag": "#jsonstat"
}
The main statistical standards for data and metadata exchange are XML-based (DDI, SDMX, or their semantic siblings SCOVO, SDMX-RDF...): they are usually complicated and verbose, and probably not best suited as an exchange format for apps.
Could stats benefit from JSON as an exchange format? How should data and metadata be expressed?
The goal of json-stat.org is to define a JSON schema for statistical dissemination.
Promoter: Xavier Badosa | Last update: 2011-11-23
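By way of illustration - a minimal sketch only, with field names and figures that are purely hypothetical rather than the json-stat.org schema itself, which is still being defined - a small statistical table such as population by municipality and year could be expressed in JSON and produced with a few lines of Python:

import json

# A hypothetical, minimal JSON encoding of a small statistical table:
# population by municipality and year. The field names ("dimension",
# "value", "updated") and all figures are illustrative only; this is
# not the json-stat.org schema.
dataset = {
    "label": "Population by municipality and year",
    "source": "Example statistics office",
    "updated": "2011-11-23",
    "dimension": {
        "municipality": ["Municipality A", "Municipality B"],
        "year": ["2009", "2010"],
    },
    # Values stored as a flat array in row-major order
    # (municipality varies slowest, year fastest).
    "value": [38700, 39300, 8900, 9300],
}

# Compact serialization: far terser than the equivalent SDMX-ML or DDI XML.
print(json.dumps(dataset, ensure_ascii=False, separators=(",", ":")))

Because a payload like this parses natively in any JavaScript environment, a web app can consume it with no extra tooling - which is precisely the kind of advantage over the XML-based standards that json-stat.org is exploring.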

Friday, November 18, 2011

BBC News - Eurozone debt web: Who owes what to whom?



The circle above shows the gross external, or foreign, debt of some of the main players in the eurozone as well as other big world economies. The arrows show how much money is owed by each country to banks in other nations. The arrows point from the debtor to the creditor and are proportional to the money owed as of the end of June 2011. The colours attributed to countries are a rough guide to how much trouble each economy is in.

Tuesday, November 8, 2011

Simple authority data - low-hanging fruit of PSI


From mashup.se, translated by Google: Simple authority data - low-hanging fruit of PSI (guest post)

Recently there has been much talk that government data (e.g. SMHI weather data, the Land Survey's map data and the National Post and Telecom Agency's telephone data) should be made freely available for reuse. The two main arguments are that this enables the development of new services that create economic value, and that it can increase transparency in the public sector. The EU's PSI Directive and the subsequent Act (2010:566) on the re-use of public administration documents have given entrepreneurs and transparency advocates high hopes.

Progress is slow

For several reasons, however, progress is quite slow. Firstly, not all data is in a condition suitable for publication: it may require filtering of confidential information, quality work and infrastructure investment. Secondly, the authorities sometimes have financial interests that seem to speak against free or cheap APIs; they simply earn significant revenue from selling data and do not want to give it away. The usefulness of transparently publishing all data is also rightly questioned: for an already overstretched authority, it is irresponsible to put resources into an API if no one benefits from it!

Simple data with great value

But government data need not mean either massive databases or unstructured word-processing documents. Authority data can also be simple lists, numeric values and codes - for example, the list of names and codes of the country's municipalities, or the current base amount.
The advantage of this kind of simple authority data is that it is usually not classified, not unstructured, not particularly expensive to produce, and does not represent any revenue that would be threatened if it were released.
Nykvarn municipality was formed in 1999, but is still missing from the municipality list in the form on the "National Atlas of Sweden" website
In fact, many of them are already free. For example, SCB (Statistics Sweden) publishes current lists of counties and municipalities, with their codes, in HTML, PDF and Excel formats. But the purpose does not appear to be to facilitate automated, electronic reuse via APIs.
There are thousands of forms on different websites where the user is expected to choose their municipality from a list - a list that is typically filled in by the developer once and for all when the form is created. Some years later, municipalities have merged, changed names or split, and the list in the form is no longer current. I know from personal experience that the need for this type of basic data arises again and again. As recently as about a week ago, in my role as a consultant (for an authority), I had a real need for exactly these current lists of municipalities and counties.
Other examples of simple authority data are postal towns and postcodes, currencies, names of agencies, degrees and professional certifications, area codes, base amounts, consumer prices and, of course, the list of all countries and country codes. Did you know that the country list built into Microsoft's .NET development platform has no Cuba, perhaps because the US does not recognise the state? Perhaps it could be a task for the Institute of International Affairs to publish an API method for anyone wishing to list the countries that Sweden considers to be countries?

Persistent current information

If there were a machine-readable list (an XML file, a web service method or the like) at an address that could be expected to be durable, with constantly updated content, it would be fairly easy to build online forms that keep themselves current. For performance reasons it would usually not be appropriate to retrieve the list every time the form is presented, but automatically downloading it once a day or week would raise the quality of the forms and the related databases.
It would, in itself, take no more effort to automatically and regularly download the Excel or HTML file that SCB already publishes and thereby achieve what I have outlined above. But for such an automated coupling to be appropriate, you need to know that the format will not change, that the address is permanent, and so on. Ideally, the service would also be available from a trusted actor (preferably an agency) over a secure communications protocol, and with well-designed versioning that handles changes robustly over time. It is not entirely trivial, but it is not rocket science either. SCB may not be able to squeeze this into its existing budget, but for the Treasury it would be a negligible cost compared with other investments of strategic importance to the country's infrastructure.
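As a rough sketch of the kind of automated coupling described above - the list URL and CSV column names are purely hypothetical, and a real implementation would add the format and version checks just mentioned - a form backend written in Python could refresh its municipality list once a day and fall back to the cached copy when the download fails:

import csv
import os
import time
import urllib.request

# Hypothetical address of a machine-readable municipality list: a CSV file
# with "code" and "name" columns. A real service would need a stable URL,
# a stable format and ideally a secure protocol, as argued above.
LIST_URL = "https://example.org/municipalities.csv"
CACHE_FILE = "municipalities.csv"
MAX_AGE_SECONDS = 24 * 60 * 60  # refresh the cache at most once a day


def get_municipalities():
    """Return a list of (code, name) tuples, refreshing the local cache daily."""
    cache_is_fresh = (
        os.path.exists(CACHE_FILE)
        and time.time() - os.path.getmtime(CACHE_FILE) < MAX_AGE_SECONDS
    )
    if not cache_is_fresh:
        try:
            with urllib.request.urlopen(LIST_URL, timeout=10) as response:
                data = response.read()
            with open(CACHE_FILE, "wb") as f:
                f.write(data)
        except OSError:
            # Network or server problem: keep using the old cached copy if one exists.
            if not os.path.exists(CACHE_FILE):
                raise
    with open(CACHE_FILE, newline="", encoding="utf-8") as f:
        return [(row["code"], row["name"]) for row in csv.DictReader(f)]

Populating a form's drop-down from get_municipalities() would then keep it reasonably current without fetching the remote list on every page view.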
Can you influence IT policy so that some authority is given that mission? Or is there a better way?
About Pär Lannerö 
Co-founder and consultant at Meta-Matrix Ltd, which since its foundation in 1999 has worked extensively with metadata on the web (or in "the Matrix", as the interconnected global computer networks were sometimes called before a certain film seized on the phrase) to achieve greater efficiency and greater transparency.