March 20, 2013
On the heels of Pew’s new report on the “State of the Media 2013,” there’s been a good bit of hand-wringing over the future of journalism in general, and of newspapers in particular. And not without reason: in 2012 newspapers lost $13 in print-ad revenue for every $1 they gained online (ads and subscriptions combined). And that’s had a sad but understandable effect; as Pew reports, “estimates for newspaper newsroom cutbacks in 2012 put the industry down 30% since its peak in 2000 and below 40,000 full-time professional employees for the first time since 1978.”
But as Matt Yglesias argues at Slate, what’s tough on the industry we’ve known might not be so bad for the society it’s there to serve. The pessimism is…
…not wrong, exactly, but it is mistaken. It’s a blinkered outlook that confuses the interests of producers with those of consumers, confuses inputs with outputs, and neglects the single most important driver of human welfare—productivity. Just as a tiny number of farmers now produce an agricultural bounty that would have amazed our ancestors, today’s readers have access to far more high-quality coverage than they have time to read.
Just ask yourself: Is there more or less good material for you to read today than there was 13 years ago? The answer is, clearly, more…
In any case, it’s worth remembering that the future of newspapers has been a subject of contemplation for over a century… and, as Smithsonian’s Paleofutures blog reminds us, of predictions that have rarely been right.
Many of us here in the 21st century like to think of the newspaper as this static institution. We imagine that the newspaper was born many generations ago and until very recently, thrived without much competition. Of course this is wildly untrue. The role of the newspaper in any given community has always been in flux. And the form that the newspaper of the future would take has often been uncertain.
In the 1920s it was radio that was supposed to kill the newspaper. Then it was TV news. Then it was the Internet. The newspaper has evolved and adapted (remember when TV news killed the evening edition newspaper?) and will continue to evolve for many decades to come.
Visions of what newspapers might look like in the future have been varied throughout the 20th century. Sometimes they’ve taken the form of a piece of paper that you print at home, delivered via satellite or radio waves. Other times it’s a multimedia product that lives on your tablet or TV…
Visit “The Newspaper of Tomorrow: 11 Predictions from Yesteryear” for an instructively humbling trip back to the future.
March 9, 2013
A guest post from (Roughly) Daily…
Readers experience DRM– digital rights management– every day, as a feature of the software they use and the entertainment they consume; it turns out that one doesn’t buy the services and experiences one thinks one’s buying; one rents them– on restrictive terms specified by the provider. Those providers take their rules very seriously indeed: they monitor their customers’ behavior for transgressions, sue their customers whenever they suspect a violation (cf. here and here, for instance), and work surreptitiously with governments to extend their controls abroad (e.g., here).
Their success to date hasn’t gone unnoticed by those selling atoms as opposed to bits. Monsanto, for example, patents its seeds and licenses them to farmers, so that those farmers can’t replant the seeds from their crops– as farmers have for centuries– but must repurchase (or relicense). And like the litigious software and entertainment giants, Monsanto aggressively protects its interests through lawsuits.
Where might all of this end? A group of eight designers competing in The Deconstruction gave us a peek:
The DRM Chair has only a limited number of uses before it self-destructs. The number of uses was set to 8, so that everyone could sit down and enjoy the chair a single time.
A small sensor detects when someone sits and decrements a counter. Every time someone gets up, the chair knocks a number of times to signal how many uses are left. When the counter reaches zero, the self-destruct system is turned on and the structural joints of the chair are melted…
[TotH to Hexus]
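The counter logic the designers describe is simple enough to sketch. Below is a minimal, hypothetical simulation– only the 8-use limit and the sit/knock/melt behavior come from the project description; the class, the method names, and the event loop are invented for illustration (the actual chair ran its logic on a small microcontroller):

```python
# A toy simulation of the DRM Chair's counter logic, as described by
# its designers. Only the 8-use limit and the sit/knock/melt behavior
# come from the project; everything else here is illustrative.

USES_ALLOWED = 8  # the designers set the counter to 8

class DRMChair:
    def __init__(self, uses_allowed=USES_ALLOWED):
        self.uses_left = uses_allowed

    def on_sit(self):
        """The seat sensor detects a sitter and decrements the counter."""
        if self.uses_left > 0:
            self.uses_left -= 1

    def on_stand(self):
        """On standing, knock once per remaining use; at zero, self-destruct."""
        for _ in range(self.uses_left):
            print("knock")                       # stands in for the knocking mechanism
        if self.uses_left == 0:
            print("melting structural joints...")  # the self-destruct step

# Eight sitters each enjoy the chair a single time; the eighth triggers the melt.
chair = DRMChair()
for _ in range(USES_ALLOWED):
    chair.on_sit()
    chair.on_stand()
```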
As we decide to stand up, we might recall that, while dentures date back (at least) to the Etruscans circa 700 BCE, it was on this date in 1822 that Charles M. Graham of New York City received the first US patent for artificial teeth.
Filed in Competition and Industry Structure, Driving Forces, Economic, Information Industry, Media and Entertainment, Political, Scenario Planning, Social, Technological
Tags: copyright, dentistry, dentures, drm, DRM chair, history of dentures, humor, intellectual property, IP, patent
October 22, 2012
Readers of this blog will know that I am concerned about rapacious and reactionary corporate attitudes to intellectual property rights– see, e.g., “Patently Absurd…,” “Caution! Pile up ahead…,” or “I was aiming for my foot, but I seem to have shot myself in the thigh…,” all posts that focus on the dangers of (and to) incumbents who substitute extended rights for innovation; enforcement, for service. It’s cold comfort to read confirmation of that concern in Simon Phipps’ report in Infoworld on the economic impact of patent trolls– the logical extension of the problem: corporations that exist only to exploit patents…
…Among the other measures in the America Invents Act, passed by Congress about a year ago, section 34 requires the nonpartisan Government Accountability Office (GAO) to conduct a study on the effects of patent trolls on the economy. The GAO in turn went to a group of academics associated with the Stanford IP Clearinghouse (now called Lex Machina) to gather the data required. Those academics have now supplied the data to the GAO and published their own assessment of the research. Covering a five-year period from 2007 to 2011, it rigorously identifies and classifies patent activities across all industries and uses a statistically significant sample to draw conclusions.
The findings should concern us all. Coining the useful term “patent monetization entity” (as a replacement for “patent troll,” “nonpracticing entity,” and “patent assertion entity” — all terms with either social or technical issues), the scholars have concluded that “lawsuits filed by patent monetizers have increased significantly over the five-year period.” Not only has the number of cases increased, but so has the proportion of these non-product-related litigants, from 22 percent to 40 percent of cases filed. They found that four of the top five patent litigants in America exist solely to file lawsuits.
This is the tip of the iceberg. Among their findings, the academics analyzing the Lex Machina data observed that many cases never reached court, and the main impact of patent monetization entities was probably in the costs they impose way before litigation commences. This is supported by a paper from the Congressional Research Service [pdf], which observes that the main goal of patent monetizers is to extract money from their victims without ever going to court.
The vast majority of defendants settle because patent litigation is risky, disruptive, and expensive, regardless of the merits; and many PAEs set royalty demands strategically well below litigation costs to make the business decision to settle an obvious one.
What’s going on here? One clue comes from the Lex Machina research. They found that technology industry cases constitute 50 percent of all patent suits; in the software industry, Internet-related patents were litigated 7.5 to 9.5 times more frequently than non-Internet patents. When cases actually go to court, they are often unsuccessful, but most lawsuits from patent assertion entities are settled out of court.
Combine that with the evidence of the unseen menace– threats that lead to payments under nondisclosure terms so as to avoid expensive litigation– and the implication grows that what we’re seeing is the abuse of an out-of-date system. It will then come as no surprise that 1 in 6 patents today covers smartphones. Guess what those patent monetization entities want to monetize?…
Just last week, speaking to the London paper Metro, Amazon’s Jeff Bezos (a man who has been known himself to use intellectual property as a weapon) lamented,
Patents are supposed to encourage innovation and we’re starting to be in a world where they might start to stifle innovation. Governments may need to look at the patent system and see if those laws need to be modified because I don’t think some of these battles are healthy for society.
Filed in Competition and Industry Structure, Driving Forces, Economic, Entrepreneuring, Information Industry, Media and Entertainment, Political, Scenario Planning, Social, Technological
Tags: competitiveness, copyright, economics, intellectual property, patent trolls, patents
September 5, 2012
As Apple revels in its recent victory over Samsung, demanding that a range of Samsung’s “look-like-Apple’s” products be banned from sale in the U.S., let us ponder Apple’s own precedents… in particular, its debt to the great Dieter Rams and his range of designs for Braun.
August 24, 2012
For reasons elaborated here, here, and here (among other places), I’m very worried that the efforts of incumbent media giants to extend copyright terms, expand “enforcement” efforts, and escalate “infringement” penalties will (further) discourage real innovation– and thus real and sustainable economic growth. That these moves are ultimately self-defeating– a kind of suicide– is certainly ironic; but it’s no consolation.
Of late there’s been some good news: consumers in the U.S. and regulators in the E.U. have said “no” to repressive (and in some cases unconstitutionally intrusive) grabs by the media industry– SOPA seems dead; ACTA is stalled (with luck, forever)…
But now there’s the TPP– the Trans-Pacific Partnership, a powerful agreement being secretly negotiated among nine countries: the United States, Australia, Peru, Malaysia, Vietnam, New Zealand, Chile, Singapore, and Brunei (Mexico, Canada, and Japan are in the process of joining). The Obama Administration (which has been disappointingly complicit with the legacy media agenda– see here and here, for example) is selling the TPP as a trade agreement, aimed at easing commerce among the signatories. (It likely also figures into the Administration’s thinking as a balance against the ASEAN–China Free Trade Area [ACFTA], to which China is central…)
As you can see in the State Department’s pitch for the treaty, it does have some potential for encouraging smoother trade among the signatories. But as you can see in this EFF analysis, there is a pretty vicious wolf under that sheep’s clothing:
…The TPP will rewrite the global rules on IP enforcement. All signatory countries will be required to conform their domestic laws and policies to the provisions of the Agreement. In the U.S. this is likely to further entrench controversial aspects of U.S. copyright law (such as the Digital Millennium Copyright Act’s broad ban on circumventing digital locks and frequently disproportionate statutory damages for copyright infringement) and restrict the ability of Congress to engage in domestic law reform to meet the evolving IP needs of American citizens and the innovative technology sector. The recently leaked U.S. IP chapter also includes provisions that appear to go beyond current U.S. law. This raises significant concerns for citizens’ due process, privacy and freedom of expression rights…
The details make pretty chilling reading… and when one notes that, while there are (so far) only 9 signatories, the treaty will dictate the terms of any bilateral trade agreements that any of the 9 enter, it’s clear that the effective footprint would be global.
Copyright– and the larger notion of intellectual property– was rightly enshrined in the Constitution (Article I, Section 8, Clause 8). But the Framers saw the importance of balance– of moderating the length and scope of protection– since their purpose was “To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” Sadly, over the years and with each revision, copyright law has become less about encouraging new innovation than about protecting the old– the status quo… and that is no recipe for growth.
Only one thing is impossible for God: To find any sense in any copyright law on the planet.
- Mark Twain
July 28, 2012
Moore’s Law– Intel co-founder Gordon Moore’s assertion that the number of transistors on integrated circuits doubles approximately every two years*– is one of the best-known axioms of our time, a rule of thumb that helps explain the explosion of technological capability over the last several decades, at the same time that it reassures us of advances-to-come. It’s attained this status the old-fashioned way: by being largely right– which is to say reasonably accurately predictive.
But as IEEE Spectrum reports, recent research at the Santa Fe Institute suggests that the broader concept on which Moore’s law was founded, the Experience Curve, is actually a better predictor of technological progress than Moore’s refinement.
Bruce Henderson and BCG tend to get credit for the idea of the Experience Curve (or the Learning Curve)– the notion that the costs of technological items drop with their cumulative production. BCG certainly did make hay with the concept back in the late 1960s. But the concept dates back to the 19th century and the work of German psychologist Hermann Ebbinghaus. Then in 1936, Theodore P. Wright observed the phenomenon in aircraft manufacture (“Factors Affecting the Cost of Airplanes” and “Learning Curve,” both in the Journal of the Aeronautical Sciences) and described the effect that has come to be known as “Wright’s Law.”
Moore’s Law seems to be a special case of Wright’s Law; and in fact, Wright’s Law seems to describe technological evolution a bit better than Moore’s—not just in electronics, but in dozens of industries.
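A quick way to see the relationship: Wright’s Law says cost falls as a power of cumulative production, C(x) = C0 · x^(−w), while Moore’s Law says cost falls exponentially with time. If cumulative production itself grows exponentially– as it long has in semiconductors– the two forms coincide exactly. Here’s a minimal sketch; all parameter values are illustrative assumptions, not the paper’s fitted numbers:

```python
import numpy as np

# Wright's Law:  C(x) = C0 * x**(-w)      (x = cumulative production)
# Moore's Law:   C(t) = C0 * 2**(-t / T)  (T = doubling time in years)
# If x(t) = x0 * exp(g*t), Wright's Law becomes an exponential in time
# with implied doubling time T = ln(2) / (w * g).
# All parameter values below are illustrative assumptions.

w = 0.4            # Wright exponent (learning-curve steepness)
g = 0.6            # annual growth rate of cumulative production
C0, x0 = 1.0, 1.0  # initial cost and production, normalized

t = np.linspace(0, 20, 200)        # years
x = x0 * np.exp(g * t)             # exponentially growing production
wright = C0 * x ** (-w)            # Wright's Law, traced over time
T = np.log(2) / (w * g)            # implied Moore doubling time
moore = C0 * 2 ** (-t / T)         # Moore's Law with that doubling time

print(f"implied doubling time: {T:.2f} years")
print(f"max relative gap: {np.max(np.abs(wright - moore) / moore):.2e}")
```

With these (made-up) numbers the two curves agree to floating-point precision– which is the sense in which Moore’s Law is a special case of Wright’s; where production growth wobbles, the two predictions diverge, and Wright’s tracks the data better.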
A new Santa Fe Institute (SFI) working paper (Statistical Basis for Predicting Technological Progress, by Bela Nagy, J. Doyne Farmer, Quan M. Bui, and Jessika E. Trancik) compares the performance of six technology-forecasting models with constant-dollar historical cost data for 62 different technologies—what the authors call the largest database of such information ever compiled. The dataset includes stats on hardware like transistors and DRAMs, of course, but extends to products in energy, chemicals, and a catch-all “other” category (beer, electric ranges) during the periods when they were undergoing technological evolution. The datasets cover spans of 10 to 39 years; the earliest dates to 1930, the most recent to 2009.
It turns out that high technology has more in common with low-tech than we thought. The same rules seem to describe price evolution in all 62 areas.
Read the whole story at “Wright’s Law Edges Out Moore’s Law in Predicting Technology Development.”
* “Two years” became, in common understanding, “18 months” when Moore’s colleague David House revised the estimate to account for faster chips contributing to the acceleration of further development.
March 17, 2012
Last December, Secretary of State Clinton delivered a rousing speech to a Conference on Internet Freedom held at The Hague, in which she hammered what has become a steady theme for her over the last couple of years:
The United States wants the internet to remain a space where economic, political, and social exchanges flourish. To do that, we need to protect people who exercise their rights online, and we also need to protect the internet itself from plans that would undermine its fundamental characteristics.
Sec. Clinton has used this high-ground sentiment to bash repressive regimes, from China to Iran– all good… as far as it goes.
But beyond noting that the focus on repression is painfully selective (e.g., no criticism of Saudi Arabia), the U.S. government is behaving in just the way that Mrs. Clinton condemns: even as her rhetoric rings “freedom blue,” State and the rest of the government have pushed repressive treaties like ACTA and domestic legislation like SOPA and PIPA, any/all of which threaten free speech (and promise to retard innovation).

And the surveillance state is growing apace. As James Bamford reports in Wired:
…Under construction by contractors with top-secret clearances, the blandly named Utah Data Center is being built for the National Security Agency. A project of immense secrecy, it is the final piece in a complex puzzle assembled over the past decade. Its purpose: to intercept, decipher, analyze, and store vast swaths of the world’s communications as they zap down from satellites and zip through the underground and undersea cables of international, foreign, and domestic networks. The heavily fortified $2 billion center should be up and running in September 2013. Flowing through its servers and routers and stored in near-bottomless databases will be all forms of communication, including the complete contents of private emails, cell phone calls, and Google searches, as well as all sorts of personal data trails—parking receipts, travel itineraries, bookstore purchases, and other digital “pocket litter.” It is, in some measure, the realization of the “total information awareness” program created during the first term of the Bush administration—an effort that was killed by Congress in 2003 after it caused an outcry over its potential for invading Americans’ privacy.
But “this is more than just a data center,” says one senior intelligence official who until recently was involved with the program. The mammoth Bluffdale center will have another important and far more secret role that until now has gone unrevealed. It is also critical, he says, for breaking codes. And code-breaking is crucial, because much of the data that the center will handle—financial information, stock transactions, business deals, foreign military and diplomatic secrets, legal documents, confidential personal communications—will be heavily encrypted. According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: “Everybody’s a target; everybody with communication is a target.”…
Read the full article– and you should read the full article– here.
February 22, 2012
A guest post from (Roughly) Daily…
…Imagine you’re a new parent at 30 years old and you’ve just published a bestselling new novel. Under the current system, if you lived to 70 years old and your descendants all had children at the age of 30, the copyright in your book – and thus the proceeds – would provide for your children, grandchildren, great-grandchildren, and great-great-grandchildren.
But what, I ask, about your great-great-great-grandchildren? What do they get? How can our laws be so heartless as to deny them the benefit of your hard work in the name of some do-gooding concept such as the “public good”, simply because they were born a mere century and a half after the book was written? After all, when you wrote your book, it sprang from your mind fully-formed, without requiring any inspiration from other creative works – you owe nothing at all to the public. And what would the public do with your book, even if they had it? Most likely, they’d just make it worse.
No, it’s clear that our current copyright law is inadequate and unfair. We must move to Eternal Copyright – a system where copyright never expires, and a world in which we no longer snatch food out of the mouths of our creators’ descendants…
A bold idea such as Eternal Copyright will inevitably have opponents who wish to stand in the way of progress. Some will claim that because intellectual works are non-rivalrous, unlike tangible goods, meaning that they can be copied without removing the original, we shouldn’t treat copyright as theft at all. They might even quote George Bernard Shaw, who said, “If you have an apple and I have an apple and we exchange these apples then you and I will still each have one apple. But if you have an idea and I have an idea and we exchange these ideas, then each of us will have two ideas.”…
Certainly we wouldn’t want to listen to their other suggestions, which would see us broaden the definition of “fair use” and, horrifically, reduce copyright terms back to merely a lifetime or even less. Not only would such an act deprive our great-great-grandchildren of their birthright, but it would surely return creativity to the dark ages of the 18th and 19th centuries, a desperately lean time for art in which we had to make do with mere scribblers such as Wordsworth, Swift, Richardson, Defoe, Austen, Bronte, Hardy, Dickens, and Keats.
Do we really want to return to that world? I don’t think so.
As we return to our senses, we might recall that it was on this date in 1632 that Galileo Galilei “published” Dialogue Concerning the Two Chief World Systems (Dialogo sopra i due massimi sistemi del mondo)– that’s to say, he presented the first copy to his patron, Ferdinando II de’ Medici, Grand Duke of Tuscany. Dialogue, which compared the heliocentric Copernican and the traditional geocentric Ptolemaic systems, was an immediate best-seller.
While there was no copyright available to Galileo, his book was published under a license from the Inquisition. Still, the following year it was deemed heretical and listed in the Catholic Church’s Index of Forbidden Books (Index Librorum Prohibitorum); the publication of anything else Galileo had written or ever might write was also banned… a ban that remained in effect until 1835.
Filed in Competition and Industry Structure, Economic, Information Industry, Media and Entertainment, Political, Scenario Planning, Social, Technological
Tags: censorship, copyright, Dialogue Concerning the Two Chief World Systems, eternal copyright, Galileo, Index of Forbidden Books, intellectual property, satire
February 18, 2011
Good news and bad… well, frustrating news on U.S. broadband strategy…
Yesterday the Commerce Department released a map detailing the availability of high-speed internet access around the U.S.
larger interactive version, here
There’s lots to explore (right down to the broadband options available on one’s own block); and there are lots of patterns that appear when one does. But surely the biggest lesson that leaps out is the digital divide that still remains: just over two-thirds of American households (68%) have broadband access; the remaining third don’t… and that unconnected third? It’s “concentrated” where Americans aren’t.
As Fast Company observes
In America’s rural areas, the internet barely exists as you and I know it: People can’t get broadband in their house; they use dial-up modems at home; and the only place they can hope to watch a YouTube video is the local library.
The Obama administration has dedicated $7.2 billion in stimulus money to fix the problem– and frankly, it’s low-hanging fruit: a relatively straightforward (and, at the price, cheap) way to boost economies in areas left out of the tech boom of the last couple of decades, but nonetheless hammered by the crash that followed.
“This is like electricity was. This is a critical utility,” says Mr. Depew of the Center for Rural Affairs.
“You often hear people talk about broadband from a business development perspective, but it’s much more significant than that,” he added. “This is about whether rural communities are going to participate in our democratic society. If you don’t have effective broadband, you are cut out of things that are really core to who we are as a country.”
Affordable broadband service through hard wiring and/or cellular phone coverage could revolutionize life in rural parts of the country. People could pay bills, shop and visit doctors online. They could work from home and take college classes.
The National Broadband Map is an important step toward an important goal. But it also sounds, for me anyway, a cautionary note…
As PC Magazine reports
Funds for the map were provided through the 2009 Recovery Act, President Obama’s economic stimulus package. That legislation provided $350 million for the creation of a national broadband inventory map. Of that, the National Telecommunications and Information Administration (NTIA) doled out $293 million in grants to all 50 states, the territories, and Washington, D.C., which was used to collect the data for the existing map and will be used over the next five years for updates. Another $20 million was provided to the FCC, which used most of it on contractors who built the map.
All told, the five-year cost of the map is about $200 million, said Larry Strickling, assistant secretary of NTIA within Commerce.
As readers of this blog and of (Roughly) Daily will know, I’m a big fan of data visualization, and a true believer in its power to clarify and motivate. The National Broadband Map is a cool tool.
But $200 million? There was presumably no census-taker-like personal visitation involved, no need to wander the country with signal meters. The data that’s collected is inconveniently segregated on vendors’ sites, but readily available on-line; it just needed to be aggregated. Compare the National Broadband Map to the Census Bureau’s interactive Census Participation Map, or to The New York Times’ Immigration Explorer, or to any number of examples one can find at Flowing Data… all attacking different illustrative problems, all with different data-aggregation challenges– but none of them costing, I’d bet, even 1% of the $200,000,000 that went to chart broadband access (most of them, probably not even 1% of that one percent). This project has taken over a year; compare the result to, say, the interactive map of election results that ran the day after the 2010 national elections (at NPR, where it is extremely safe to assume that the cost ran into the thousands, as opposed to the hundreds of millions, of dollars). Slashdot’s item on the map was headed “from the this-took-a-fifth-of-a-billion-dollars-to-determine? dept.” Indeed.
I understand that the National Broadband Map was part of the stimulus package, and that creating jobs was an immediate goal– one that can make for a less-than-optimally-efficient process. And I understand that the $200 million is the 5-year cost; it’s taken a year to get this far. But the point of the $7.2 billion is to increase access for unplugged Americans– and to do it as quickly as possible. Building out that infrastructure will be irreducibly expensive, involving as it will fiber, transmitters, and the labor to install them. And it will take time.
As it is, we’re well over a year into the effort, and we have a map whose cost reduced the $7.2 billion set aside to help bridge the digital divide to $7 billion– and disenfranchised Americans still can’t see it.
Filed in Competition and Industry Structure, Driving Forces, Economic, Information Industry, Political, Scenario Planning, Social, Technological
Tags: 2009 Recovery Act, broadband, broadband strategy, Center for Rural Affairs, Commerce Department, data visualization, FCC, infographics, national broadband map, National Telecommunications and Information Administration, NTIA, rural broadband, stimulus package, stimulus, visualization
February 9, 2011
Our friends north of the border have decided that they’re just not going to take it any more– beyond the damage to Canada’s educational and economic prospects threatened by punitively high bandwidth costs, it’s just plain insulting– and the pressure consumers/citizens are exerting is beginning to promise positive effects…
We in the U.S. would do well to pay close attention: even as the Canadian authorities are reconsidering, the Washington Post reports, U.S. Internet service providers are moving, with the blessing of federal regulators, to metered plans.
Bandwidth does, of course, have a cost; wired ISPs like Comcast and AT&T make huge capital investments building and maintaining their “last mile” networks of residential cable and fiber, for which they charge us monthly connection fees: fixed fees to cover fixed costs. Their cost to deliver a marginal gigabyte (about an hour of viewing something like Netflix’s streaming service) to the consumer is something less than a penny– and dropping. Pay-per-gigabyte metering, when it’s on top of already-high (to put it generously) monthly base fees, simply doesn’t make sense. Indeed, at the rates being proposed, it’s as insulting down here as it is in the Great White North.
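To put rough numbers on that mismatch– and to be clear, these are back-of-envelope assumptions, not carrier data; only the “less than a penny” marginal cost comes from the discussion above, and the $1-per-gigabyte overage rate is an assumed figure in the neighborhood of the plans being floated:

```python
# Back-of-envelope comparison of an ISP's marginal cost per gigabyte
# with a metered overage charge. All figures are illustrative
# assumptions; only the "less than a penny" marginal cost is from
# the text above.

marginal_cost_per_gb = 0.01   # assumed ISP cost to deliver one more GB (USD)
metered_price_per_gb = 1.00   # assumed per-GB overage charge (USD)
gb_per_streamed_hour = 1.0    # the text's ~1 GB per hour of streaming

markup = metered_price_per_gb / marginal_cost_per_gb
two_hour_movie = 2 * gb_per_streamed_hour * metered_price_per_gb

print(f"markup over marginal cost: {markup:.0f}x")          # -> 100x
print(f"metered cost of a two-hour stream: ${two_hour_movie:.2f}")
```

Even granting generous room for fixed-cost recovery, a markup of two orders of magnitude on the marginal gigabyte looks less like cost-based pricing than like rent collection.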