The Self-Driving Car's Bicycle Problem

Uber’s experiment in San Francisco showed that bicycles and bike lanes are a problem self-driving cars are struggling to crack

Photo: iStockphoto

Robotic cars are great at monitoring other cars, and they’re getting better at noticing pedestrians, squirrels, and birds. The main challenge, though, is posed by the lightest, quietest, swerviest vehicles on the road.

“Bicycles are probably the most difficult detection problem that autonomous vehicle systems face,” says UC Berkeley research engineer Steven Shladover.

Nuno Vasconcelos, a visual computing expert at the University of California, San Diego, says bikes pose a complex detection problem because they are relatively small, fast, and heterogeneous. “A car is basically a big block of stuff. A bicycle has much less mass and also there can be more variation in appearance — there are more shapes and colors and people hang stuff on them.”

That’s why, in recent years, car detection has improved faster than bicycle detection. Most of the gains have come from machine-learning techniques in which systems train themselves by studying thousands of images in which known objects are labeled. One reason the gap persists is that most of that training has concentrated on images featuring cars, with far fewer bikes.
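
As a concrete picture of that training loop, here is a minimal sketch, assuming PyTorch, of the self-training idea: a model repeatedly sees labeled examples and adjusts its weights to reduce its mistakes. Random tensors stand in for labeled road images so the snippet runs anywhere; real systems train far larger detectors on benchmark datasets.

```python
# A minimal sketch of the self-training idea, assuming PyTorch: the model
# sees batches of labeled examples and nudges its weights to reduce error.
# Random tensors stand in for labeled road images so this runs anywhere.
import torch
import torch.nn as nn

# toy two-class detector head: label 0 = car, 1 = bicycle
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    images = torch.randn(32, 3, 64, 64)   # stand-in for camera crops
    labels = torch.randint(0, 2, (32,))   # stand-in for human annotations
    loss = loss_fn(model(images), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The imbalance the article describes: if the label stream were, say, 95
# percent cars, the model would see far fewer bikes, and its bike accuracy
# would lag its car accuracy for exactly that reason.
```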

Consider the Deep3DBox algorithm presented recently by researchers at George Mason University and stealth-mode robotic taxi developer Zoox, based in Menlo Park, Calif. On an industry-recognized benchmark test, which challenges vision systems with 2D road images, Deep3DBox identifies 89 percent of cars. Sub-70-percent car-spotting scores prevailed just a few years ago.

Deep3DBox further excels at a tougher task: predicting which way vehicles are facing and inferring a 3D box around each object spotted on a 2D image. “Deep learning is typically used for just detecting pixel patterns. We figured out an effective way to use the same techniques to estimate geometrical quantities,” explains Deep3DBox contributor Jana Košecká, a computer scientist at George Mason University in Fairfax, Virginia.
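
To make “estimating geometrical quantities” a bit more concrete: in the published Deep3DBox formulation, the network regresses an object’s dimensions and heading from the image crop, and the object’s 3D position is then recovered by requiring the projected 3D box to fit the 2D detection tightly. The sketch below illustrates that geometric step with a brute-force depth search in place of the paper’s closed-form constraint solve; the intrinsics, box, and regressed values are all invented for the demo.

```python
# Toy illustration of the Deep3DBox-style geometric step: given camera
# intrinsics plus network-regressed dimensions and yaw, recover the 3D
# position whose projected box best matches the 2D detection. The paper
# solves tight-fit corner constraints in closed form; a brute-force depth
# search stands in for that here. All numeric values are invented.
import numpy as np

def box_corners(dims, yaw, t):
    """8 corners of a 3D box (h, w, l) with heading yaw at translation t,
    in camera coordinates (y pointing down, box bottom at t)."""
    h, w, l = dims
    x = np.array([ l/2,  l/2, -l/2, -l/2,  l/2,  l/2, -l/2, -l/2])
    y = np.array([ 0.0,  0.0,  0.0,  0.0,   -h,   -h,   -h,   -h])
    z = np.array([ w/2, -w/2, -w/2,  w/2,  w/2, -w/2, -w/2,  w/2])
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])  # rotation about camera y
    return (R @ np.vstack([x, y, z])).T + t

def project(K, pts):
    """Pinhole projection of Nx3 camera-frame points to pixels."""
    uvw = (K @ pts.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def fit_translation(K, box2d, dims, yaw, depths=np.linspace(4, 60, 200)):
    """Search over depth for the translation whose projected 3D box
    lines up best with the 2D box [u1, v1, u2, v2]."""
    u0, v0 = (box2d[0] + box2d[2]) / 2, (box2d[1] + box2d[3]) / 2
    best_t, best_err = None, np.inf
    for z in depths:
        # back-project the 2D box center to a candidate 3D center at depth z
        t = np.array([(u0 - K[0, 2]) * z / K[0, 0],
                      (v0 - K[1, 2]) * z / K[1, 1], z])
        uv = project(K, box_corners(dims, yaw, t))
        proj = [uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max()]
        err = np.abs(np.array(proj) - np.array(box2d)).sum()
        if err < best_err:
            best_t, best_err = t, err
    return best_t

K = np.array([[721.5, 0, 609.6], [0, 721.5, 172.9], [0, 0, 1.0]])  # KITTI-like
box2d = [600, 160, 800, 260]       # hypothetical 2D detection
dims, yaw = (1.5, 1.6, 3.9), 0.3   # pretend network outputs (meters, radians)
print(fit_translation(K, box2d, dims, yaw))
```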

However, when it comes to spotting and orienting bikes and bicyclists, performance drops significantly. Deep3DBox is among the best, yet it spots only 74 percent of bikes in the benchmarking test. And though it can orient over 88 percent of the cars in the test images, it scores just 59 percent for the bikes.

Košecká says commercial systems are delivering better results as developers gather massive proprietary datasets of road images with which to train their systems. And she says most demonstration vehicles augment their visual processing with laser-scanning (i.e., lidar) imagery and radar sensing, which help recognize bikes and their relative position even if they can’t help determine their orientation.

Further strides, meanwhile, are coming via high-definition maps such as Israel-based Mobileye’s Road Experience Management system. These maps offer computer vision algorithms a head start in identifying bikes, which stand out as anomalies from pre-recorded street views. Ford says “highly detailed 3D maps” are at the core of the 70 self-driving test cars that it plans to have driving on roads this year.

Put all of these elements together, and one can observe some pretty impressive results, such as the bike spotting demonstrated last year by Google’s vehicles. Waymo, Google’s autonomous vehicle spinoff, unveiled proprietary sensor technology with further upgraded bike-recognition capabilities at this month’s Detroit Auto Show.

Vasconcelos doubts that today’s sensing and automation technology is good enough to replace human drivers, but he believes it can already help human drivers avoid accidents. Automated cyclist detection is seeing its first commercial applications in automated emergency braking (AEB) systems for conventional vehicles, which are expanding to respond to pedestrians and cyclists in addition to cars.

Volvo began offering the first cyclist-aware AEB in 2013, crunching camera and radar data to predict potential collisions; it is rolling out similar tech for European buses this year. More automakers are expected to follow suit as European auto safety regulators begin scoring AEB systems for cyclist detection next year.
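
At the heart of such systems is a simple quantity computed from fused camera and radar tracks: the time to collision. Here is a hedged sketch of the trigger logic, with illustrative thresholds that are not Volvo’s.

```python
# Illustrative AEB trigger logic: brake automatically when the time to
# collision (TTC) with a tracked cyclist drops below the time the car
# needs to stop. All thresholds here are made-up examples.

def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if nothing changes; inf if the gap is opening."""
    if closing_speed_mps <= 0:
        return float("inf")
    return range_m / closing_speed_mps

def should_brake(range_m: float, closing_speed_mps: float,
                 reaction_s: float = 0.15, decel_mps2: float = 8.0) -> bool:
    """Trigger when TTC falls below system latency plus braking time."""
    time_to_stop = reaction_s + closing_speed_mps / decel_mps2
    return time_to_collision(range_m, closing_speed_mps) < time_to_stop

# Cyclist 8 m ahead, closing at 8 m/s (about 29 km/h):
print(should_brake(8.0, 8.0))   # True: TTC is 1.0 s, stopping needs ~1.15 s
# Same cyclist 20 m ahead: no intervention yet.
print(should_brake(20.0, 8.0))  # False: TTC is 2.5 s
```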

That said, AEB systems still suffer from a severe limitation that points to the next grand challenge that AV developers are struggling with: predicting where moving objects will go. Squeezing more value from cyclist-AEB systems will be an especially tall order, says Olaf Op den Camp, a senior consultant at the Dutch Organization for Applied Scientific Research (TNO). Op den Camp, who led the design of Europe’s cyclist-AEB benchmarking test, says that’s because cyclists’ movements are especially hard to predict.

Košecká agrees: “Bicycles are much less predictable than cars because it’s easier for them to make sudden turns or jump out of nowhere.”

That means it may be a while before cyclists escape the threat of human error, which contributes to 94 percent of traffic fatalities, according to U.S. regulators. “Everybody who bikes is excited about the promise of eliminating that,” says Brian Wiedenmeier, executive director of the San Francisco Bicycle Coalition. But he says it is right to wait for automation technology to mature.

In December, Wiedenmeier warned that self-driving taxis deployed by Uber Technologies were violating California driving rules designed to protect cyclists from cars and trucks crossing designated bike lanes. He applauded when California officials pulled the vehicles’ registrations, citing the ridesharing firm’s refusal to secure state permits for them. (Uber is still testing its self-driving cars in Arizona and Pittsburgh, and it recently got permission to put some back on San Francisco streets strictly as mapping machines, provided that human drivers are at the wheel.)

Wiedenmeier says Uber’s “rush to market” is the wrong way to go. As he puts it: “Like any new technology this needs to be tested very carefully.”

AI Decisively Defeats Human Poker Players

An AI named Libratus has beaten human pro players in no-limit Texas Hold’em for the first time

Photo: Andrew Rush/Pittsburgh Post-Gazette/AP Photo
Poker pro Jason Les with computer mouse in hand plays against the Libratus AI.

Humanity has finally folded under the relentless pressure of an artificial intelligence named Libratus in a historic poker tournament loss. As poker pro Jason Les played his last hand and leaned back from the computer screen, he ventured a half-hearted joke about the anticlimactic ending and the lack of sparklers. Then he paused in a moment of reflection.

“120,000 hands of that,” Les said. “Jesus.”

Libratus lived up to its “balanced but forceful” Latin name by becoming the first AI to beat professional poker players at heads-up, no-limit Texas Hold’em. The tournament was held at the Rivers Casino in Pittsburgh from 11 to 30 January. Developed by Carnegie Mellon University, the AI won the “Brains Vs. Artificial Intelligence” tournament against four poker pros by $1,766,250 in chips over 120,000 hands (games). Researchers can now say that the victory margin was large enough to count as a statistically significant win, meaning that they can be at least 99.7 percent sure that the AI victory was not due to chance.
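
As a back-of-the-envelope sketch of what such a significance claim involves, one can treat per-hand chip results as independent samples and test whether the observed mean win could plausibly be zero. The per-hand standard deviation below is an assumed placeholder, not the researchers’ figure; they used their own variance-reduction analysis to reach the 99.7 percent threshold.

```python
# Hypothetical significance check: is the mean per-hand win distinguishable
# from zero? The standard deviation is an assumption for illustration only.
from math import erf, sqrt

hands = 120_000
total_win = 1_766_250                  # chips, from the article
mean_per_hand = total_win / hands      # about 14.7 chips per hand
assumed_sd_per_hand = 500.0            # placeholder; real poker variance is large

z = mean_per_hand / (assumed_sd_per_hand / sqrt(hands))
p_one_sided = 0.5 * (1 - erf(z / sqrt(2)))
print(f"z = {z:.1f}, one-sided p = {p_one_sided:.2e}")
# 99.7 percent confidence corresponds to roughly z = 2.75 one-sided
# (or the familiar three-sigma rule, two-sided).
```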

Previous attempts to develop poker-playing AI that can exploit the mistakes of opponents—whether AI or human—have generally met with limited success, says Tuomas Sandholm, a computer scientist at Carnegie Mellon University. Libratus instead focuses on improving its own play, an approach he describes as safer and more reliable than the riskier business of trying to exploit opponent mistakes.

“We looked at fixing holes in our own strategy because it makes our own play safer and safer,” Sandholm says. “When you exploit opponents, you open yourself up to exploitation more and more.”

Even more important, the victory demonstrates that AI has likely surpassed the best humans at strategic reasoning in “imperfect information” games such as poker. The no-limit Texas Hold’em version of poker is a good example of an imperfect-information game because players must deal with the uncertainty of two hidden cards and unrestricted bet sizes. An AI that performs well at no-limit Texas Hold’em could also potentially tackle real-world problems with similar levels of uncertainty.

“The algorithms we used are not poker specific,” Sandholm explains. “They take as input the rules of the game and output strategy.”

In other words, the Libratus algorithms can take the “rules” of any imperfect-information game or scenario and come up with a strategy of their own. For example, the Carnegie Mellon team hopes its AI could design drugs to counter viruses that evolve resistance to certain treatments, or perform automated business negotiations. It could also power applications in cybersecurity, military robotic systems, or finance.
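
Libratus’s exact algorithms are proprietary, but Sandholm’s group has long built on the counterfactual-regret-minimization (CFR) family of methods, which have exactly this rules-in, strategy-out shape: self-play iterations drive regret toward zero, and the averaged behavior converges toward an equilibrium strategy. Here is a minimal, textbook CFR sketch for Kuhn poker, a three-card toy game, to make the pattern concrete (this is the standard algorithm, not Libratus’s code).

```python
# Standard counterfactual regret minimization (CFR) for Kuhn poker:
# the "rules" go in, a near-equilibrium strategy comes out via self-play.
import random
from collections import defaultdict

ACTIONS = ["p", "b"]  # pass/check (p) or bet/call (b)

class Node:
    """Regret and strategy accumulators for one information set."""
    def __init__(self):
        self.regret_sum = [0.0, 0.0]
        self.strategy_sum = [0.0, 0.0]

    def strategy(self):
        s = [max(r, 0.0) for r in self.regret_sum]
        total = sum(s)
        return [x / total for x in s] if total > 0 else [0.5, 0.5]

    def average_strategy(self):
        total = sum(self.strategy_sum)
        return [x / total for x in self.strategy_sum] if total > 0 else [0.5, 0.5]

nodes = defaultdict(Node)

def cfr(cards, history, p0, p1):
    """Expected value for the player to act; updates regrets on the way."""
    plays = len(history)
    player = plays % 2
    if plays > 1:  # check for terminal states
        player_wins = cards[player] > cards[1 - player]
        if history[-1] == "p":
            if history == "pp":              # check-check: showdown for 1
                return 1 if player_wins else -1
            return 1                         # opponent folded to a bet
        if history[-2:] == "bb":             # bet-call: showdown for 2
            return 2 if player_wins else -2
    node = nodes[str(cards[player]) + history]
    strat = node.strategy()
    util, node_util = [0.0, 0.0], 0.0
    for i, a in enumerate(ACTIONS):
        if player == 0:
            util[i] = -cfr(cards, history + a, p0 * strat[i], p1)
        else:
            util[i] = -cfr(cards, history + a, p0, p1 * strat[i])
        node_util += strat[i] * util[i]
    reach_me, reach_opp = (p0, p1) if player == 0 else (p1, p0)
    for i in range(2):
        node.regret_sum[i] += reach_opp * (util[i] - node_util)
        node.strategy_sum[i] += reach_me * strat[i]
    return node_util

cards = [1, 2, 3]  # Kuhn poker's entire deck
for _ in range(20_000):
    random.shuffle(cards)
    cfr(cards, "", 1.0, 1.0)

for info_set in sorted(nodes):
    print(info_set, [round(p, 2) for p in nodes[info_set].average_strategy()])
```

Running it prints near-equilibrium betting probabilities for every situation, including a bluffing frequency with the weakest card, without any poker-specific strategy having been programmed in.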

The Libratus victory comes two years after a first “Brains Vs. Artificial Intelligence” competition held at the Rivers Casino in Pittsburgh in April-May 2015. During that first competition, an earlier AI called Claudico fell short of victory when it challenged four human poker pros. That competition proved a statistical draw in part because it featured just 80,000 hands of poker, which is why the Carnegie Mellon researchers decided to bump up the number of hands to 120,000 in the second tournament.

The four human poker pros who participated in the recent tournament—Jason Les, Dong Kim, Daniel McAulay, and Jimmy Chou—spent many extra hours each day trying to puzzle out Libratus. They teamed up at the start of the tournament with a collective plan: each would try different ranges of bet sizes to probe for weaknesses in the Libratus AI’s strategy that they could exploit. Each night of the tournament, they gathered in their hotel rooms to analyze the day’s plays and talk strategy.

The human strategy of playing weird bet sizes had its greatest success in the first week, though the AI never relinquished the lead it took at the start. Libratus held a growing lead of $193,000 in chips by the third day, but the poker pros narrowed it by clawing back $42,201 in chips on the fourth day. After losing an additional $8,189 in chips to Libratus on the fifth day, the humans scored a sizable victory of $108,775 in chips on the sixth day and cut the AI’s lead to just $50,513.

But Libratus struck back by winning $180,816 in chips on the seventh day. After that, the “wheels were coming off the wagon” for the human poker pros, Sandholm says. They noticed that Libratus seemed to become especially unbeatable toward the last of the four betting rounds in each game, and so they tried betting big up front to force a result before the fourth round. They speculated on how much Libratus could change its strategy within each game. But victory only seemed to slip further away.

One of the players, Jimmy Chou, became convinced that Libratus had tailored its strategy to each individual player. Dong Kim, who performed the best among the four by only losing $85,649 in chips to Libratus, believed that the humans were playing slightly different versions of the AI each day.

After Kim finished playing on the final day, he helped answer some questions for online viewers watching the poker tournament through the live-streaming service Twitch. He congratulated the Carnegie Mellon researchers on a “decisive victory.” But when asked about what went well for the poker pros, he hesitated: “I think what went well was… shit. It’s hard to say. We took such a beating.”

In fact, Libratus played the same overall strategy against all the players, relying on three main components. First, the AI’s algorithms computed a strategy before the tournament by running for 15 million processor-core-hours on a new supercomputer called Bridges.

Second, the AI would perform “endgame solving” during each hand to precisely calculate how much it could afford to risk in the third and fourth betting rounds (the “turn” and “river” rounds in poker parlance). Sandholm credits the endgame-solving algorithms as contributing the most to the AI victory. The poker pros noticed Libratus taking longer to compute during these rounds and realized that the AI was especially dangerous in the final rounds, but their “bet big early” counterstrategy was ineffective.

Third, Libratus ran background computations during each night of the tournament so that it could fix holes in its overall strategy. That meant Libratus was steadily improving its overall level of play and minimizing the ways that its human opponents could exploit its mistakes. It even prioritized fixes based on whether or not its human opponents had noticed and exploited those holes. By comparison, the human poker pros were able to consistently exploit strategic holes in the 2015 tournament against the predecessor AI called Claudico.

By the end of the tournament, the poker pros had long since resigned themselves to their fate. Daniel McAulay, the last poker pro to finish his hands for the day, turned to an offscreen spectator and joked: “How much do I have to pay you to play the last 50 hands? Uhhhh, this is so brutal.”

The Libratus victory translates into a winning rate of 14.7 big blinds per 100 hands, in poker parlance—an astounding rate considering the AI was playing four human poker pros. Prior to the start of the tournament, online betting sites had been giving 4:1 odds against Libratus, casting it as the underdog. But Sandholm seemed confident enough in the AI’s tournament performance to state that “there is no human who can beat Libratus.”
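
The arithmetic behind that rate is simple, assuming the tournament used a $50/$100 blind structure (the big-blind size is our assumption here, not stated above):

```python
# Winning rate in big blinds per 100 hands, from the article's totals.
chip_win = 1_766_250   # Libratus's margin in chips
hands = 120_000
big_blind = 100        # assumed $50/$100 blind structure
print(chip_win / big_blind / hands * 100)  # ~14.7 bb/100
```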

Despite the historic victory over humans, AI still has a ways to go before it can claim to have perfectly solved heads-up, no-limit Texas Hold’em. That’s because the computational power required to solve the game is still far beyond even the most powerful supercomputers. The game has 10¹⁶⁰ possible plays at different stages—which may be more than the number of atoms in the universe. In 2015, a University of Alberta team demonstrated AI that provides a “weak” solution to a less complex version of poker with fixed bet sizes and a fixed number of betting rounds.

But as the defeated poker pros drifted away from their computer stations one by one, gloomy viewer comments floated up on the live-stream’s Twitch chat window. “Dude poker is dead!!!!!!!!!!!” wrote one Twitch user before adding “RIP poker.” Others seemed concerned about computer bots dominating future online poker games: “its tough to identify a bot from online poker rooms? ppl are terified [sic].”

There is some good news for anyone who enjoys playing—and winning—at poker. Libratus still required serious supercomputer hardware to perform its calculations and improve its play each night, said Noam Brown, a Ph.D. student in computer science at Carnegie Mellon University who worked with Sandholm on Libratus. Brown reassured the Twitch chat that invincible poker-playing bots probably would not be flooding online poker play anytime soon.

Another Twitch user asked if poker still counts as a skill-based game. The question seemed to reflect anxiety about the meaning of a game that millions of people enjoy playing and watching: What does it all mean if an AI can dominate potentially any human player? But Sandholm told the Twitch chat that he sees poker as “definitely a skill-based game, no question.”

“People are worried that my work here has killed poker: I hope it has done the exact opposite,” Sandholm said. “I think of Poker and no limit [Texas hold’em] as a recreational intellectual endeavor in much the same way as composing a symphony or performing ballet or playing chess.”

As the final day of the tournament wound down, the Carnegie Mellon professor thanked the online viewers for watching and supporting the competition. And he took the time to answer a number of lingering questions about the new AI overlord of poker.

“Does Libratus call me daddy?” Sandholm read aloud a Twitch chat question. “No, it can’t speak.”

Video Friday: Muscle for Tough Robots, Cobots on Wheels, and WALK-MAN Goes for a Walk

Your weekly selection of awesome robot videos

Image: WALK-MAN Project via YouTube
WALK-MAN humanoid robot steps over an obstacle during a demonstration.

Video Friday is your weekly selection of awesome robot videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next two months; here’s what we have so far (send us your events!):

RoboBusiness Europe – April 20-21, 2017 – Delft, Netherlands
NASA SRC Virtual Competition – June 12-16, 2017 – Online

Let us know if you have suggestions for next week, and enjoy today’s videos.


Opportunity is 13 years old! That’s 13 years of roving Mars without a single tune-up. Not bad, right?

Happy birthday Oppy!!

[ NASA JPL ]


The hydraulic high-power muscle has been developed by the Suzumori Endo Robotics Laboratory at the Tokyo Institute of Technology and Bridgestone Corporation as part of the Impulsing PAradigm Change through disruptive Technologies (ImPACT) program’s Tough Robotics Challenge, an initiative of the Cabinet Office’s Council for Science, Technology and Innovation. The muscle is 15 mm in diameter and generates a contraction force of 700 kgf (about 6.9 kN).

[ Suzumori Endo Lab ]


Finally, a way for Baxters and Sawyers to flee from those awful boring jobs people keep giving them:

[ Dataspeed ]


At the German Aerospace Center (DLR) Institute of Robotics and Mechatronics, we are developing a research platform for an assistive robotic system. The platform called EDAN (EMG-controlled Daily Assistant) consists of a robotic manipulator, which is mounted on a power wheelchair. The focus of this research is twofold. On the one hand, we investigate the use of Electromyography (EMG) as a non-invasive interface to provide people with control over assistive systems. On the other hand, we develop methods to simplify the usage of such systems with the support of artificial intelligence.

[ DLR EDAN ]


WALK-MAN demonstrates its skill at not falling over while navigating around obstacles, and also at doing useful stuff. We’re impressed.

This research is a collaboration between IIT, EPFL, UNIPI, KIT, and UCL.

[ WALK-MAN ]

Thanks Dimitris!


EPFL is responsible for some of those incredible robotic spy animals that the BBC has been using to try to out-Attenborough itself:

Here, have some more, because cuuuute!

[ EPFL ] and [ BBC ]


Not many bipedal robots could walk dynamically on a steep, slippery hill. MARLO can do it even with a broken ankle (!):

Looks like this research will be presented at ICRA, and a preview of the paper is at the link below.

[ Paper ] via [ MARLO ]


The Digger Foundation’s urban mine clearing project involves a Human-Dog-Robot Interaction (HDRI) system, reminding us that dogs are way better at some things than humans or robots will ever be.

This project is one of the finalists for the UAE Robotics for Good prize, and honestly, I’m not sure how much more good you can get than helping people not get blown up.

[ Digger Foundation ] via [ Robotics for Good ]

Thanks Adrien!


Interns at Dorabot used a pair of UR5 arms and some 3D-printed grippers to make dumplings for Chinese New Year:

[ Dorabot ]

Thanks Betty!


It’s Chinese New Year in Singapore this weekend (and everywhere else), and Relay is getting into the spirit of things:

Here’s a video of how Relay works from the perspective of the hotel staff, including the SECRET CODE that lets you control Relay directly:

[ Savioke ]


Somebody needs to try this with one of those beefed-up delivery drones. Better bring some spare heads for that poor dummy.

Researchers at Virginia Tech — home to both a Federal Aviation Administration-designated test site for unmanned aircraft systems and a world-renowned injury biomechanics group — are developing methods to evaluate the risk posed by small unmanned aircraft to anyone on the ground. This research is key to enabling flights over people. Federal Aviation Administration regulations for unmanned aircraft systems, or UAS, currently prohibit these flights unless a special waiver is granted. The regulations are designed to prevent injuries if an aircraft descends unexpectedly or the pilot loses control. But they present challenges for efficiently conducting operations that otherwise seem ideally suited for unmanned aircraft, such as package delivery and aerial journalism. And most applications face steep hurdles in densely populated areas, where it would be virtually impossible to ensure that there was no one beneath an aircraft’s flight path.

[ Virginia Tech ]


If you need a company to make an inspiring commercial for you, you can hire Robotiq. They also sell robot hands.

[ Robotiq ]


iRobot has done some incredible things for the robotics community, including founding National Robotics Week and helping to enable affordable research with the Create platform.

There are a limited number of Create 2 units available right now from iRobot with PrimeSense 3D sensors for $300.

[ iRobot Create 2 ]


This week’s CMU RI Seminar: Carrick Detweiler on “Micro-UAS for Prescribed Fire Ignition.”

Unmanned aerial systems (UASs) are increasingly being used for everything from crop surveying to pipeline monitoring. They are significantly cheaper than the traditional manned airplane or helicopter approaches to obtaining aerial imagery and sensor data. The next generation of UASs, however, will do more than simply observe. In this talk, I will discuss recent advances we have made in the Nimbus Lab in developing the first UAS that can ignite prescribed fires. Prescribed fire is a critical tool used to improve habitats, combat invasive species, and reduce fuels to prevent wildfires. In the United States alone, federal and state governments use prescribed burns on over 3 million acres each year, with private landowners prescribing even more. Yet this activity can be extremely dangerous, especially when performing interior ignitions in difficult terrain. In this talk, I will discuss the history of this project and the challenges associated with flying near and igniting fires. In addition, I will detail the mechanical and software design challenges we have had to overcome in this project. I will also present the results of the first two prescribed burns that were successfully ignited by a UAS. Finally, I will discuss automated software analysis techniques we are developing to detect and correct system errors to reduce risk and increase safety when using UASs to ignite prescribed burns.

[ CMU RI ]
