Over the next week three of the more influential people in the world of the Internet, including two men regarded as the “fathers of the Internet,” will speak in Princeton.

On Thursday, March 6, at 11:30 a.m., Princeton professor Ed Felten will speak at the Princeton Regional Chamber of Commerce’s luncheon at the Princeton Marriott. Felten, founding director of the Center for Information Technology Policy, will speak about digital currency, including Bitcoin. Tickets are $50 for chamber members, $70 for nonmembers. Visit www.princetonchamber.org.

Then on Wednesday, March 12, at 4:30 p.m. at the Friend Center at Princeton University, Vinton Cerf and Bob Kahn, the inventors of the TCP/IP protocol, will speak. The event is free, with first-come, first-served seating. For more information, go online to commons.princeton.edu/kellercenter.

The two giants of Internet technology will take the occasion of the 40th anniversary of the invention of the TCP/IP protocol, which stands for “Transmission Control Protocol/Internet Protocol,” to reflect on the evolution of the Internet. Cerf is chief Internet evangelist at Google, and Kahn is CEO of the Corporation for National Research Initiatives, based in Virginia.

#b#Vinton Cerf: Your Networked Future#/b#

In the future, everything will be connected to the Internet. And that means everything. Vinton Cerf predicts that in the future, your car, your fridge, your lightbulbs, and even your face will all be connected to the Internet.

There is reason to listen to Cerf, since he is recognized as one of the fathers of the Internet, along with Bob Kahn, for his role in the 1973 creation of the protocol that is used for all Internet communications.

Cerf, the son of an aerospace executive, earned his B.S. degree in mathematics from Stanford University and — after a few years at IBM — got his Ph.D. at UCLA in 1972. At UCLA he met Bob Kahn, with whom he co-designed the TCP/IP protocols that are the basis for the modern-day Internet.

Cerf, who is chief Internet evangelist for Google, sees enormous upside to this “Internet of Things,” as well as potential dangers. Cerf spoke to the Federal Trade Commission on November 19, 2013. What follows is an edited transcript of his remarks:

I’m going to start by giving you a little bit of a sense of what the Internet is like today. It looks something like this. The picture shows the different Internet service providers in different colors. There are 500,000 Internet service providers now, or more, that make up the global Internet.

What’s interesting is that this is not controlled from the top; this is a completely distributed system. Every one of those Internet service providers has its own business model, and it could be for profit, not for profit, government, amateur, whatever it is. They run whatever software and hardware they choose to use, they choose whom to interconnect with, and there is no dictated requirement for interconnection. There are no rules about whether you pay or don’t pay, whether you peer or not.

This is an entirely collaborative activity and it is global in scope. So it’s really quite astonishing, and it has been expanded by RF, radio frequency devices, including wi-fi and all kinds of mobile communications capabilities. I would like to point out to you how powerful the mobile has turned out to be.

The two things, the Internet and the mobile, mutually reinforce each other’s utility. The mobile allows you access to the Internet at any time, assuming you are within range of a base station, and the Internet allows the mobile to get access to all of the content, all of the computing power, and all of the other functionality of the Internet and the world wide web.

So the two have been very mutually reinforcing and, as you can see, they have expanded rapidly as a consequence. These are statistics that are probably from midyear: slightly under a billion devices on the network. These are devices that have domain names and have fixed IP addresses that you would typically find if you were searching for things. It does not include laptops, desktops, and mobiles that are intermittently connected to the network. So the absolute number of Internet-enabled devices could be in the billions, probably 3 or 4 billion devices, maybe not all connected at the same time.

The number of users, again, is not exactly well-known because there isn’t one place where you have to sign up so that we can keep track, but a reasonable estimate is about 3 billion people. Which means that, as the Internet evangelist, I have 4 billion more people to convert, so I can use help if anybody is interested.

There are on the order of 7 billion mobiles in use, although that does not translate into 7 billion people because a lot of people have more than one. Maybe many of you do. Certainly, in other parts of the world that is the case. Maybe a billion-and-a-half or so personal computers and laptops and things like that. So that’s sort of the global picture. It is a very large, very distributed system.

But I want to go back in history. This is mid-1975, and we were experimenting with mobile radio. We needed this giant van at SRI International in Menlo Park, California, to do the experiments because the radios were about a cubic foot in size and cost $50,000 each.

But the point I wanted to make is that we were experimenting with packetized voice in the mid-1970s. And so a lot of the applications that you think of as new today had their pioneering exposure literally 35 years ago. Now this was particularly amusing because, in order to do this, we had to take the voice signal, which was 64,000 bits per second, and compress it down to 1,800 bits per second, because there wasn’t very much capacity in the network in those days. And when you do that, you basically model the vocal tract as a stack of cylinders and you send the diameters of the cylinders to the other side. There are only 10 parameters plus a forming frequency, and the other guy inverts that to make sound.
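
As a rough sketch of the arithmetic involved (the frame length and bit budgets below are assumed for illustration, not the parameters of the actual 1970s vocoder), sending a dozen or so model parameters per short frame instead of raw samples is what gets you from 64,000 bits per second down to roughly 1,800:

    # Back-of-envelope arithmetic for parametric voice compression.
    # All specific values here are illustrative assumptions.
    RAW_RATE_BPS = 64_000        # 8,000 samples/sec x 8 bits, standard telephony

    FRAME_SECONDS = 0.0225       # assumed frame length (22.5 ms)
    PARAMS_PER_FRAME = 12        # e.g., 10 tract parameters plus pitch and gain
    BITS_PER_PARAM = 3.4         # assumed average quantization budget

    frames_per_second = 1 / FRAME_SECONDS
    compressed_bps = frames_per_second * PARAMS_PER_FRAME * BITS_PER_PARAM

    print(f"compressed rate ~ {compressed_bps:.0f} bits/sec")           # about 1,800
    print(f"compression ratio ~ {RAW_RATE_BPS / compressed_bps:.0f}x")  # about 35x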

It made everyone who talked through the system sound like a drunken Norwegian. And there’s a long story about trying to demonstrate this to a bunch of generals in the Pentagon, which is pretty amusing, but they came away impressed that we could do something other than data with a system like this. We were experimenting with packetized video as well.

The list is really quite long now of things that are either currently networkable or will be networked in the future: television, the mobile obviously, tablets, picture frames and things of that sort. Lots of sensor systems are becoming part of this environment, and those systems are used for a variety of different purposes.

Some of them might be for security, some for environmental monitoring. In one case, agriculture, there is a guy who has a GPS location for every vine in his vineyard, and he keeps track of the state of the soil, watering, pH, and everything else, literally on a vine-by-vine basis, and he uses that data to decide how much water and what kinds of nutrients should be made available to each vine in his vineyard.

And that’s the sort of thing that is not at all unreasonable. Medical instrumentation is also becoming very common. Here is a simple example of an insulin pump, which keeps track of blood sugar levels on a continuous basis, and then the pump decides, based on that sampled information, whether or not to inject some amount of insulin into the body. That information could be captured, for example, by a mobile and then used for analytical purposes. And I think this notion of continuous monitoring is important for several reasons, not the least of which is that continuously monitoring things tells you about the processes in a much more refined way than if you showed up at the doctor once every six months or once every three months or only when you’re sick.
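
A toy sketch of that closed loop, with made-up thresholds and stand-in functions for the sensor and the pump (it is only meant to show the sample-decide-log cycle, not a real dosing algorithm):

    # Toy closed-loop monitor: sample continuously, decide, and log every
    # reading so it can be analyzed later. read_glucose_mg_dl() and
    # deliver_insulin() are hypothetical stand-ins for real hardware.
    import random
    import time

    def read_glucose_mg_dl():
        return random.gauss(140, 30)             # stand-in for the sensor

    def deliver_insulin(units):
        print(f"delivering {units:.1f} units")   # stand-in for the pump

    log = []
    for _ in range(5):                    # a real device loops indefinitely
        reading = read_glucose_mg_dl()
        log.append((time.time(), reading))
        if reading > 180:                 # illustrative threshold only
            deliver_insulin(0.5)
        time.sleep(0.1)                   # real devices sample every few minutes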

And so this continuous monitoring is not just for the medical cases. It applies to many other kinds of instrumentation and has turned out to be a really important and valuable way of observing dynamic processes and then using that data to analyze their state. Many of you might be wearing a Fitbit or might just be using applications in your mobile that keep track of how much you moved during the day, whether you went up or down or sideways, and how many steps you took.

This, by the way, is also important because there is a feedback loop here. So one of the interesting things about gathering data in this way, with this Internet of things, is that you get feedback that tells you something about the consequences of your behavioral choices in the course of the day or the month or the year.

In the case of electrical appliances, as in the Smart Grid, if you get enough information back about which devices you used during the course of the month that generated the bill, that might actually cause you to change the choices you make, because the costs might be less.

And you can imagine a third party analyzing the data, which you presumably authorized, to tell you what steps you could take to change the way in which you use not only electricity, but possibly other consumable resources like water and gas and so forth.
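
A minimal sketch of the kind of feedback being described, assuming per-device usage readings and a flat tariff (the device names, readings, and rate are invented for the example):

    # Turn per-device monthly usage into a cost breakdown a consumer could act on.
    TARIFF_PER_KWH = 0.15   # assumed flat rate, in dollars

    monthly_kwh = {"water heater": 310, "refrigerator": 95, "lighting": 60, "dryer": 75}

    for device, kwh in sorted(monthly_kwh.items(), key=lambda item: -item[1]):
        print(f"{device:>14}: {kwh:4d} kWh  ${kwh * TARIFF_PER_KWH:6.2f}")

    total_cost = sum(monthly_kwh.values()) * TARIFF_PER_KWH
    print(f"{'total':>14}: ${total_cost:.2f}")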

Remotely controlled devices turn out to be pretty important, especially in crisis response. Knowing that the power is out in your home might be a very important thing to know, especially if you are not there. It is also helpful for the power company to know which houses are out of power. Often that’s not as easy to find out as you would like and, of course, it’s clumsy to have people call a telephone number to try to report that.

There are an increasing number of devices that we’ll call wearables. Google is experimenting with one called Google Glass. Here, I want to emphasize something interesting about this sort of Internet-enabled device. The Google Glass is an experiment. What’s interesting about it is that it is essentially no different, functionally, from strapping [a cell phone] to your forehead, but I can tell you this is very uncomfortable.

Google Glass is a little bit easier. It has a camera, it has a microphone, and it has a bone conduction speaker so that you can hear what it is saying and no one else can; it also leaves your ears free to hear the ambient sound. And it has a little video display.

And the reason this is so interesting is that it brings the computer into your audio and video environment. It sees what you see and it hears what you hear. So here’s an example that we can almost do. Imagine you have a blind German speaker and you have a deaf American sign language speaker. They are both wearing Google Glass and they want to communicate with each other, so let’s see what happens.

The German guy says, “Guten Nachmittag. Ich heisse Vint Cerf,” which means “Good afternoon, my name is Vint Cerf.” And of course the deaf guy doesn’t hear this, but the Google Glass picks up the sound, translates the German into English, and then presents the English on the display so the deaf guy can actually see the captions.

Now, the deaf guy responds by signing, which the blind guy can’t see, but the camera in the Google Glass that the blind guy is wearing can see the signs, translate the signs into English, translate the English into German, and speak that German through the bone conduction speaker into the head of the blind German speaker. So the two of them are now communicating thanks to the intermediation of this Google Glass.
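
In rough pseudocode, the two directions of that exchange look something like the sketch below. Every helper in it (speech recognition, translation, sign recognition, speech synthesis) is a canned stub standing in for services that would do the real work; none of these are actual Glass or Google APIs.

    # Sketch of the two translation pipelines, with canned stubs in place of
    # real speech, sign, and translation services.
    def speech_to_text(audio, language):
        return "Guten Nachmittag. Ich heisse Vint Cerf."   # canned stub

    def translate(text, source, target):
        return f"[{source}->{target}] {text}"   # stub only marks the translation

    def sign_video_to_text(video):
        return "Nice to meet you."   # the step that cannot yet be done at speed

    def text_to_speech(text, language):
        return b"<synthesized audio>"           # canned stub

    def caption_for_deaf_user(german_audio):
        german = speech_to_text(german_audio, language="de")
        return translate(german, source="de", target="en")    # shown as captions

    def speech_for_blind_user(sign_video):
        english = sign_video_to_text(sign_video)
        german = translate(english, source="en", target="de")
        return text_to_speech(german, language="de")           # bone conduction speaker

    print(caption_for_deaf_user(b"<recorded audio>"))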

I don’t want to mislead you into thinking that we can actually do all of that. We can come awfully close. The one thing that we can’t do right now is actually correctly interpret signs at speed, but this is not something that is crazy. I mean, this is the kind of engineering thing that is possible. And then, of course, there are lots of thoughts about having automobiles communicate with each other.

When you get into self-driving cars, you begin to see some fascinating possibilities for the utility of cars talking to each other. When four of them come to an intersection, instead of one of them wanting to be macho and everything else, they just run the standard algorithm to figure out who goes next. They don’t have road rage, they’re not impatient. They just do the protocol, unlike human drivers.
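
A minimal sketch of one such rule (order by arrival time, break ties by vehicle ID) is below; it is only an illustration of the idea, not any real vehicle-to-vehicle standard.

    # Deterministic "who goes next" arbitration for connected cars at a four-way
    # stop: earliest arrival first, ties broken by vehicle ID. Assumes a shared
    # clock; all data here is invented.
    from dataclasses import dataclass

    @dataclass
    class Car:
        vehicle_id: str
        arrival_time: float   # seconds on the shared clock

    def crossing_order(cars):
        return sorted(cars, key=lambda c: (c.arrival_time, c.vehicle_id))

    queue = [Car("B7", 12.40), Car("A3", 12.38), Car("D1", 12.40), Car("C9", 12.45)]
    for car in crossing_order(queue):
        print(car.vehicle_id, "crosses")
    # A3 crosses, then B7 and D1 (tie broken by ID), then C9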

So here’s an example of things that are already in use. The Internet-enabled refrigerator is interesting because I used to wonder what you would do with an Internet-enabled refrigerator. Well, one obvious thing is that it might have a nice touch-sensitive panel on the front, and it augments the ordinary American family communication method, which is paper and magnets on the front of the refrigerator.

If you had an RFID detector inside the refrigerator and the things you put in had little RFID chips on them, the refrigerator would know what it had inside. So while you’re off at work, it is searching the Internet for recipes that it knows it could make with what it has inside. So when you come home, you see a display saying, here are all the recipes you could make. And you could extrapolate on this: you could be on vacation and you get an email; it’s from your refrigerator, and it says you put the milk in there three weeks ago and it is going to crawl out on its own if you don’t do something.

Or you are shopping and your mobile goes off and it says, don’t forget the marinara sauce. I have everything else I need for a spaghetti dinner tonight.
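
The mechanics behind that scenario are simple enough to sketch: take the inventory from the RFID reads and keep any recipe whose ingredients are all on hand. The tag data and recipes below are invented.

    # Recipe matching from an RFID-scanned inventory (invented example data).
    inventory = {"eggs", "milk", "butter", "spaghetti", "parmesan"}   # from RFID reads

    recipes = {
        "carbonara": {"spaghetti", "eggs", "parmesan", "butter"},
        "pancakes": {"flour", "eggs", "milk", "butter"},
        "omelette": {"eggs", "butter"},
    }

    # A recipe is cookable if its ingredient set is a subset of the inventory.
    cookable = [name for name, needed in recipes.items() if needed <= inventory]
    print("You could make:", ", ".join(cookable))   # carbonara, omelette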

But the Japanese have messed up this whole beautiful idyllic view. They’ve invented an Internet-enabled bathroom scale. You step on the scale and it figures out which family member you are, based on your weight, and it sends that information to the doctor and it becomes part of your medical record. Which is all perfectly reasonable except for one thing. The refrigerator is on the same network as the scale. So when you come home, you see diet recipes coming up.

Everybody is familiar with Internet-enabled picture frames. Many of you probably have them. They pull images from a selected website and then they cycle through them. We use them in our family: we have mobile phones with cameras in them, so we take pictures and upload them to a website; all of the family picture frames download those pictures, and you get up in the morning and you see what the nieces and the nephews and the grandchildren are doing.

There is a security issue here. If the website that has these pictures gets hacked, then the grandparents may see pictures of what they hope are not the grandchildren.

There is a guy who has built an Internet-enabled surfboard. I haven’t met him. I have an image of him sitting on the water, waiting for the next wave, thinking: if I had a laptop in my surfboard I could be surfing the Internet while I’m waiting for the next wave.

So he built a laptop into the surfboard and he put a wi-fi service back at the rescue shack and now he sells this as a product. So if you want to go out on the water and surf the Internet while you are waiting for the next wave, that’s the product for you.

Mobiles are everywhere, and Internet-enabled lightbulbs are being mentioned. I actually used to tell jokes about this 20 years ago. I’d say, someday every electric lightbulb will have its own IP address. Ha, ha. I thought that was funny, until I was given an IPv6 radio-enabled LED lightbulb. They cost about $20, and they probably last about 15 years. The cost of putting the radio in might be 50 cents or something, which is not bad considering the total price of the lightbulb. And if it lasts for 15 years, maybe this isn’t so crazy.

This is another example. I have a sensor network in my house that is using IPv6. It is a radio-based 6LoWPAN system and this is a product — not me in the garage with the soldering gun. The company that made this was called Arch Rock, which was acquired by Cisco Systems a few years ago. Basically, each one of the devices is about the size of a mobile. It runs on two AA batteries for very nearly a year.

This thing is a mesh network, so when you turn it all on, it self-organizes, and the store-and-forward hopping takes the data from each one of the sensors and ultimately delivers it through the mesh network to a server that is down in the basement in a rack of equipment. So it is measuring temperature, humidity, and light levels in each room in the house every five minutes.

I am actually very interested in gathering the data that way. I know it sounds like something only a geek would do, but think for a minute of having a year’s worth of information about heating, ventilation, and air conditioning in every room of the house. At the end of the year, you have a pretty good idea of how well the heating and cooling were distributed. You don’t have to rely only on anecdotal information; you have real engineering data to do that. And so that’s useful.
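
As a small example of what that engineering data allows, a sketch of a per-room, per-month temperature summary is below; it assumes the readings are logged as rows of timestamp, room, temperature, humidity, and light.

    # Per-room monthly average temperature from five-minute sensor readings.
    # The CSV layout (timestamp, room, temp_c, humidity, light) is an assumption.
    import csv
    from collections import defaultdict
    from datetime import datetime

    totals = defaultdict(lambda: [0.0, 0])    # (room, month) -> [sum, count]

    with open("sensor_log.csv", newline="") as f:
        for row in csv.DictReader(f):
            month = datetime.fromisoformat(row["timestamp"]).month
            key = (row["room"], month)
            totals[key][0] += float(row["temp_c"])
            totals[key][1] += 1

    for (room, month), (temp_sum, count) in sorted(totals.items()):
        print(f"{room:12s} month {month:2d}: {temp_sum / count:.1f} C average")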

This is going to be a very common thing to do. I would expect this to be built into most new homes: certainly that, plus many other kinds of security controls, heating, ventilation, air conditioning, and other kinds of things, building on the notion of the smart home.

Smart cities are another extension of the smart home, the smart grid, and the smart devices. A city able to monitor what is going on in the city, with traffic flow being an obvious example, could make quite a big difference for people trying to select which routes to take.

At Google we bought a company called Waze, and that is doing crowd-sourced traffic reporting. You can imagine instrumenting the city to get even more precise data, not dependent simply on voluntary reporting. But you can see that other kinds of information, like outages or usage of water or gas and so on, could be available to a city for use in immediate operations and possibly also for use in projecting demand in the future.

So I have this sense that monitoring and reporting in the city is a very powerful idea. There are some cities, like Barcelona, that are rapidly moving in that direction. So if you are interested in smart cities, you might do a Google search for Barcelona and smart city and see where they are.

It’s obvious that there are all kinds of things that governments can do, local governments, state governments, and so on, to communicate with citizens about things that they care about. Whether it is license fees or taxes or other sorts of things, it is yet another example of smartness. It does not have so much to do with sensors; it just has to do with city services being presented to users on a 24-hour basis.

After companies realized that they should be available to consumers 24 hours a day, the consumers started to say, why can’t the government do the same thing? I don’t want to hear “Sorry, our offices are closed.”

Another issue is access to the information that the city might be able to provide. Setting aside privacy concerns, not to ignore them, but merely to say that if there is information which does not have a privacy issue associated with it, then open access to information that the city knows about its operation could facilitate the creation of new businesses that gather or analyze the data for useful purposes.

So this notion of using information from an online environment, from a monitored environment, is actually an opportunity to create new businesses, new jobs, and things of that sort. In fact, one of the interesting statistics I wish I had, and do not have, from the Labor Department is some sense of how rapidly jobs are changing. It would be interesting to look over five-year intervals at what jobs are commonly being occupied, what those tasks are, whether those jobs still exist, and how many jobs there are that didn’t exist five years ago. And I think if you were to look, certainly in the high-tech industry, you would discover very quickly that jobs in that space change very, very rapidly. I mean, think about the world wide web in 1994: there were no webmasters. And now, of course, there are lots of them because they figured out how to be webmasters by looking at the HTML code in the web pages.

There is really enormous potential here for all kinds of optimizations based on the data that is accumulated and potentially shared. And so we should not lose track of the fact that having greater knowledge of how resources are consumed, when they are consumed, and at what rate and everything else, and aggregated over potentially larger and larger regions, could really tell us a great deal about how to manage those resources better.

Standards are important here because interoperability is very important. Even though there is a natural tendency in some product development to do things that are proprietary, locking customers into that particular standard, there is almost invariably pressure in the end to have common standards, so that devices are able to work together.

If you go and buy an Internet-enabled device from Company A and then you buy another one from Company B, there are good reasons for you to want to know that they can both be managed through a piece of software that understands what the standards are and not have to be adapted to every possible proprietary protocol. It doesn’t mean that we will end up necessarily with exactly one protocol, but you certainly don’t want too many of them.

And by creating those standards, you create a real opportunity for new businesses to form, whether they manage the devices, make the devices, analyze the data coming from the devices, or control the devices. And we should care about that because these types of devices can create new job opportunities for all of us and improve GDP growth.

It’s obvious that we have health management and wellness opportunities similarly through this continuous monitoring, which we talked about before. There are even some very interesting educational implications of all of this. If you have Internet-enabled devices, you may be able to get access to information from anywhere, and we are seeing that effect in the Internet with things called MOOCs, “massive open online courses.”

One observation I want to make about the MOOCs is that, if you do the math with regard to the economics of it, it’s pretty stunning. If you have 100,000 people taking a class and you charge each of them $10, it’s a million dollar class. There aren’t very many professors who can claim that they are teaching one million dollar classes.

And the cost per student is very low because of the scaling effect. So I am very excited about the potential to provide access to a large amount of educational material at a very modest cost to a very big audience. And by reducing the cost, you make it affordable to a larger cadre of people.

And second, because they are online and you can take them whenever you want to, continuing education becomes a pretty attractive possibility for people who want to continue to grow in their jobs. And it’s pretty obvious that as soon as it’s easy to Internet-enable things, people will go out and do that, so there will be new products and services on that basis.

The comments about privacy and the alerting of users to the use of information are well-intended. But I am thinking about the ordinary user, who isn’t really sure, or may not have the patience to try to figure out, exactly what it means and what the implications are of this particular piece of information being made available.

I think people are lazy and don’t want to be bothered and they just want stuff to work, which I think puts an even bigger burden on the implementers and the operators of these systems to be very, very cognizant of protecting users’ safety and their privacy.

It’s not simple to figure out what to do with all of the instrumentation and the data that comes back. But as I said, I think there are huge opportunities for analysis of that information. The other big problem is that there are going to be bugs. And those bugs can either be hazardous, because they offer an attack surface that allows someone to take control of the device or, possibly through that control, to get to other devices in the home network, or they will simply cause problems.

And getting things fixed is hard, especially if you don’t have a good model in your head for exactly how this stuff works. So by the way, that may actually create yet another set of job opportunities for people to come out and help fix your Internet-enabled devices when they don’t seem to work. That suggests, again, the potential opportunities for third-party businesses.

#b#Re-Inventing The Internet#/b#

While Vinton Cerf has been evangelizing for the Internet, Bob Kahn has been at work trying to remake it.

The son of a Brooklyn high school administrator, Kahn earned his bachelor’s in electrical engineering from the City College of New York in 1960 and a Ph.D. from Princeton in 1964. After working for Bell Labs and teaching at MIT, he began the work that led to the TCP/IP protocols.

Recently Kahn has been working with the International Telecommunication Union (ITU), a United Nations agency for information and communication technologies based in Europe, to develop a way to make data more trackable on the Internet, where pirated files currently flow freely. Below is an excerpt of an interview with Kahn from the ITU blog, posted on January 6:

Kahn’s latest project is the Digital Object (DO) Architecture. A key feature of the DO Architecture is the unique persistent identifier associated with each digital object. Imagine a large document or blog post with a lot of embedded URLs. After a certain amount of time those URLs will most likely become non-operational.

If you replace those URLs with unique persistent digital object identifiers then, if properly administered, the links will never be lost — because the identifier is now associated with a digital object rather than a port on a machine. That’s only part of the story, though. The DO Architecture is exciting technology; it also provides security features that can, for example, better enable transactions and rights management. Libraries and the film industry are among the early adopters of this technology.
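
In miniature, the resolution step works like the sketch below: the persistent identifier is looked up in a registry that returns the object’s current location, so a citation never has to change even when the object moves. The identifiers and registry contents here are invented for illustration.

    # Toy resolver: persistent identifier -> current location of the object.
    registry = {
        "20.500.1234/report-2014-03": "https://archive.example.org/objects/8861",
        "20.500.1234/dataset-07": "https://mirror.example.net/ds/07",
    }

    def resolve(identifier):
        try:
            return registry[identifier]
        except KeyError:
            raise LookupError(f"no location registered for {identifier}")

    # A document cites the identifier; the resolver supplies today's location.
    print(resolve("20.500.1234/report-2014-03"))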

With DO Architecture were you trying to address current challenges or facilitate new ways of doing things or both?

In the late 1980s my colleague Vint Cerf and I perceived the need to move beyond the rather static methods being used to manage information in the Internet. This led to an effort which we called Knowbot programming, or more generally, mobile programming. We wrote a report — The Digital Library Project, Vol. 1: The World of Knowbots (March 1988) — that describes the basic components of an open architecture for a digital library system and a plan for its development. Certain information management aspects of this effort, in particular the identifier/resolution component, were later developed to become the basis for the Digital Object (DO) Architecture, an overview of which is available online.

ITU-T recently approved a global standard for the discovery of identity information that was based on CNRI’s contribution. What is this recommendation and why is it important?

With the proliferation of information systems in the Internet that has developed across the world, and with the associated creativity and innovation, a critical question has arisen: “What are the basic building blocks available to the public that will enable interoperability across such heterogeneous systems?”

[The recommendation] was based on CNRI’s DO Architecture … the notion of “digital object,” or more abstractly, “digital entity,” defined as an “entity” that is represented as, or converted to, a machine-independent data structure (of one or many elements) that can be parsed by different information systems, with each such digital entity having an associated unique persistent identifier.

These concepts are the basis for the deployment of systems of registries to improve the discovery and accessibility of not just identity-related management information, but information in digital form, more generally.

What can DO Architecture contribute in fields such as banking and healthcare towards security and privacy?

Security is a fundamental capability of the DO Architecture, which is not the case for other distributed management systems for information in digital form in the Internet.

The basic administration of the identifier/resolution component of the DO Architecture is based on a public key infrastructure (PKI) regime. The creator of a digital object (or more abstractly, digital entity) has the ability to restrict access to their objects to known users: people or machines known to the system by their respective identifiers.

This system allows for a direct correlation between the security measures deployed and the degree of privacy achieved. Think of the medical records doctors keep on patients. If a record is structured as a digital entity, access to this confidential information can be limited to authorized users, based on their identifiers and their ability to respond accurately to a PKI challenge. In some cases, access may mean permission to obtain a digital entity in its entirety. In other cases, access may mean permission to perform specific operations on all or part of the digital entity.
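
A minimal sketch of such a PKI challenge, using the Python cryptography package and an assumed registry of known users’ public keys (the identifier scheme is invented): the repository sends a fresh random challenge, the user signs it, and access is granted only if the signature verifies.

    # Challenge-response access check against a registered public key.
    # The registry layout and identifiers are assumptions for the sketch.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    alice_key = Ed25519PrivateKey.generate()
    registered_public_keys = {"user/alice": alice_key.public_key()}   # known users

    def access_allowed(user_id, sign_callback):
        challenge = os.urandom(32)              # fresh random challenge
        signature = sign_callback(challenge)    # produced on the user's device
        try:
            registered_public_keys[user_id].verify(signature, challenge)
            return True
        except (KeyError, InvalidSignature):
            return False

    print(access_allowed("user/alice", alice_key.sign))   # True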

#b#Making Sense Of Digital Coin#/b#

Given the recent headlines about Bitcoin, the failure of one of its key exchanges earlier this year, and the bewildering concept of digital currency that seems to be based on no defined asset, some of us might invest with Bernie Madoff before Bitcoin.

In fact, the theory behind Bitcoin and other digital currency is not totally incomprehensible. Say you’re itching to buy that couch you saw on Overstock.com. Until the beginning of this year, you would complete the purchase online by inputting your credit card information. Overstock.com would send you the couch — and the merchant (and indirectly you) would be hit with a 2 to 3 percent credit card processing fee.

Since January, however, customers at Overstock.com and a growing list of other retailers have had the option to pay in bitcoin, a digital currency that was first introduced in 2009. The benefits: an instant, irreversible transaction of which the customer is in full control, with minimal or no transaction fees. Earlier this month Overstock.com announced it had conducted more than $1 million in transactions using bitcoin.

Bitcoin is a crypto-currency: the creation of bitcoins, through a process called mining, and their transfer are controlled by cryptography. Bitcoins can be mined or can be purchased with traditional currency through bitcoin exchanges. A person’s bitcoins are stored in a virtual wallet, which ensures the validity of transactions: a bitcoin transferred from Joe to Bob comes with a digital signature unique to Joe’s wallet that allows Bob to verify that the bitcoin was in fact Joe’s to begin with.
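
The signature check at the heart of that description can be sketched as follows. Real Bitcoin transactions use a different signature scheme and a much richer format; this toy example (again using the Python cryptography package) only shows the core idea that a transfer carries a signature anyone can verify against the sender’s key.

    # Toy transfer: Joe signs what is being spent and to whom, so Bob (or anyone)
    # can verify the transfer came from the holder of Joe's key.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    joe_key = Ed25519PrivateKey.generate()
    bob_key = Ed25519PrivateKey.generate()

    bob_pub = bob_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    transfer = b"coin:42ab17|to:" + bob_pub     # invented coin reference

    signature = joe_key.sign(transfer)          # made with Joe's wallet key

    # Verification raises an exception if the signature is forged or altered.
    joe_key.public_key().verify(signature, transfer)
    print("transfer verified as signed by Joe")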

Because they are relatively new and not widely used, small events can trigger big price changes. This means bitcoins are highly volatile — for example, a single bitcoin ranged in value from $680 to $691 during the 15 minutes it took to write this in the first week in March (and this value is in sharp contrast to the $1,200 price a bitcoin commanded in late November).

Princeton professor Ed Felten has a long history of taking a skeptical and critical view of information technology and Internet-related issues. Drawn to the uses of computers through his father, the manager of a plumbing supply firm that was computerizing its business operations, Felten majored in physics at the California Institute of Technology. In 1993 he earned his Ph.D. in computer science at the University of Washington and came to Princeton’s Department of Computer Science that same year. In 1995 he started working on computer security and issues relating to software licensing.

Well before the collapse of a Bitcoin exchange earlier this year, Felten noted that the Bitcoin system might not be as stable as some believed. But he still sees the concept of a digital currency as an exciting development in the information age. Following are excerpts from a November 29 post to Felten’s blog, Freedom to Tinker:

Bitcoin is hot right now because of the recent run-up in its value. At the same time, Bitcoin is a fascinating example of how technology, economics, and social interactions fit together to create something of value.

Our Bitcoin work started with a paper by Josh Kroll, Ian Davey and me, about the dynamics and stability of the Bitcoin mining mechanism. There was a folk theorem that the Bitcoin system was stable, in the sense that if everyone acted according to their incentives, the inevitable result would be that everyone followed the rules of Bitcoin as written. We showed that this is not the case, that there are infinitely many outcomes that are stable yet differ from the written rules of Bitcoin. So the rule-following behavior that we currently see is at best stable in the weaker sense that if everyone else is following the rules (and no one mining entity has too much power) then deviating from the rules will cost you money.

Beyond this, we have built a better understanding of the “political economy” of Bitcoin — how the Bitcoin community governs itself to keep the system operating well, despite the lack of a central authority and despite the complicated issues around the theoretical stability of the protocol. The ultimate goal of this line of work is to understand how Bitcoin is likely to deal with challenges in the future, and whether there are feasible changes that could improve the governance of Bitcoin.

Since then, we have started several more Bitcoin-related projects. My faculty colleague Arvind Narayanan (who joined us last year) as well as several more students are working on Bitcoin, and the pace has accelerated. We’re building tools to track and diagnose the behavior of the peer-to-peer network that Bitcoin participants use to spread information about what is happening. We’re looking at the dynamics of mining pools, in which a group of miners cooperate to spread the risk inherent in mining. We’re considering new types of double-spending attacks and how to defend against them.

Let me highlight one current project: we’re designing a decentralized prediction market using the Bitcoin protocol. Prediction markets enable participants to trade “shares” on potentially any event with well-defined outcomes, such as a presidential election or sporting events. The market prices of these shares can be interpreted as the probability of the event occurring. Prediction markets offer societal benefits because of this ability to accurately aggregate the wisdom of crowds. Decentralization can improve prediction markets in various ways including robustness to closure (see Intrade), greater expressivity in defining markets and outcomes, and potentially lower fees leading to more accuracy in pricing unlikely events. . . .
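
The “prices as probabilities” reading is easy to make concrete: if a share pays one unit when the event happens and nothing otherwise, its market price is the crowd’s implied probability. The events and prices below are invented.

    # Market price of a binary share read as an implied probability.
    def implied_probability(price, payout=1.0):
        return price / payout

    markets = {
        "Candidate X wins the election": 0.62,
        "Team Y wins the championship": 0.18,
    }

    for event, price in markets.items():
        print(f"{event}: implied probability {implied_probability(price):.0%}")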

The analogy is often made that Bitcoin will do to money what the Internet did to communications. If that is the case, many, many interesting and useful designs that use Bitcoin as an underlying protocol are waiting to be discovered. It’s an exciting time to be doing research in this area.
