followthemedia.com - a knowledge base for media professionals
The War Of Words Heats Up Between Google And Newspaper Publishers Wanting To Protect Their Online News Copy

Google continues to say that robots.txt gives newspaper publishers all the protection they need to stop Google accessing their online news, but the publishers, who have developed their own new coding system to give them more control, are growing ever angrier that Google won’t play ball.

Putting the cat among the pigeons this week was Google’s Rob Jonas, European head of media and publisher partnerships, who told a media meeting in London that Google saw no reason to adopt the newly developed Automated Content Access Protocol (ACAP). The new system lets a web site block indexing of specific pages, or of an entire site. It extends what was available from robots.txt, the convention developed in 1994 to block crawlers from content on a server, and from the robots meta tag, developed later to allow page-by-page blocking.

Google says that newspapers that do not want Google picking up their copy, for copyright or other reasons, need only apply robots.txt. Publishers counter that robots.txt is merely a blocking tool that says either “yes” or “no”, whereas ACAP communicates automatically with the search engines, telling their robots what they may do with each page of copy: publish it entirely, publish only extracts, or not touch it at all (a side-by-side sketch of the two approaches appears below). But if the search engines’ robots don’t “talk” to ACAP, and Google reconfirmed this week that it is not going to, then the system won’t work.

That brought some really strong language from Gavin O’Reilly, president of the World Association of Newspapers (WAN) and chairman of the media consortium that developed ACAP. “It’s rather strange for Google to be telling publishers what they should think about robots.txt, when publishers worldwide – across all sectors – have already and clearly told Google that they fundamentally disagree. If Google’s reason for not (apparently) supporting ACAP is built on its own commercial self-interest, then it should say so, and not glibly throw mistruths about.”

WAN says publishers in 16 countries have started to apply ACAP, but it may be that fewer organizations are picking up the new system than originally hoped, prompting O’Reilly to send a special message to WAN members at the end of February: “This is a decisive moment for the Automated Content Access Protocol, the new standard devised by the newspaper, magazine and book industries to protect our digital publishing interests and make us masters of our own content. We have done the hard work, we have defined a new set of rules for working online and now we need to ensure that they become a part of the Internet landscape.” He added, “We started the ACAP project because we knew that we had to take responsibility not just for identifying the problem of managing permissions on the Internet, but also for providing the solution. We have successfully created a new protocol and tested it, and we are moving into the next phase of work.”

The note had a flavor of “here’s what we have done for you, so why aren’t you taking advantage of it?” desperation behind it, although a WAN spokesman says not so. “It hasn’t been in the market very long and we’re currently informing publishers about it and making a case for implementation,” the spokesman said. “The response from publishers has been positive – there are arguably different ways one could approach this issue, but there is an industry consensus that something has to be done to address the issue. It isn’t only about Google -- there are hundreds and hundreds of sites that crawl content.”
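For readers who want to see the mechanics, the contrast is easiest side by side. The robots.txt directives and robots meta tag below are the standard, long-established forms; the ACAP fields are only a hedged sketch of the consortium’s published 1.0 proposal, which extends robots.txt with usage-specific permissions, so the field names and paths shown should be read as illustrative rather than as a definitive rendering of the spec.

    # robots.txt: a blanket yes/no, here barring Google's crawler from a section
    User-agent: Googlebot
    Disallow: /news/

    # The robots meta tag gives the same yes/no decision page by page
    <meta name="robots" content="noindex, nofollow">

    # ACAP (sketch): robots.txt-style fields that grant or deny specific uses,
    # so a publisher states how copy may be used, not just whether it may be read.
    # Field names follow the pattern of the ACAP 1.0 proposal; illustrative only.
    ACAP-crawler: *
    ACAP-allow-crawl: /news/
    ACAP-allow-present-snippet: /news/
    ACAP-disallow-present: /premium/

The sticking point in the dispute is that last block: a crawler that has not been written to parse the ACAP fields simply ignores them, which is why the protocol is only as strong as the search engines’ willingness to honor it.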
Google maintains it does not see how publishers are hurt by the search engine promoting their news content and sending readers directly to their sites; publishers take the view that it is they who should decide what material Google may access. To make the point of how Google actually helps newspapers, Jonas noted that since last October, when the Financial Times opened its site to Google News and made 30 stories a month available to everyone for free, the site has seen a 75% increase in traffic and gained an additional 230,000 registered users. Why wouldn’t other publishers want similar opportunities to increase their readership?

O’Reilly said that Google was part of “12 months of intensive cross industry consideration and active development” discussing what was wrong with robots.txt, so it seemed a little strange when Jonas told the UK’s Press Gazette that there was a communications problem. Asked whether traditional media consider Google to be the enemy, he said, “The one thing I have learned over the last couple of years is that most of those fears and concerns come from a misunderstanding. If we had time to sit down with them and explain what our aims are we could talk them through our way of doing things. But as it is we can’t really do that. It’s just a lack of detailed understanding over what we are trying to achieve.” This couldn’t have been achieved during the 12 months of “intensive” talks with the ACAP consortium?

Jonas emphasized that all Google really does is drive traffic to news web sites around the world, so why should publishers complain about that? An executive at the UK Daily Telegraph’s web site put Google’s case last year: “I want people to find Telegraph content in any way they choose. Be it through Google News, RSS, some obscure map mash-up I’ve never heard of (and need never become aware of), a link from a widget on someone else’s blog, I really don’t care. Come one, come all. The very idea of exclusion is ridiculous to any publisher with an advertising-based model that relies on traffic to pay the bills.” It’s a point the ACAP people still haven’t really answered.
copyright ©2004-2007 ftm partners, unless otherwise noted