24th June 2017

Rockstar urges Take-Two to ease off on Grand Theft Auto modders

Rockstar has issued a statement through its support knowledge base solidifying its support for user-created mods of its games.

“Rockstar Games believes in reasonable fan creativity, and, in particular, wants creators to showcase their passion for our games,” the post begins. “After discussions with Take-Two, Take-Two has agreed that it generally will not take legal action against third-party projects involving Rockstar’s PC games that are single-player, non-commercial, and respect the intellectual property (IP) rights of third parties.”

This statement seems to be a response to the Grand Theft Auto 5 community’s intense negative reaction to publisher Take-Two Interactive’s cease-and-desist order against the creator of OpenIV, a long-standing modding tool for Grand Theft Auto 4, Grand Theft Auto 5 and Max Payne 3. Rockstar Games said that Take-Two’s decision was based on the fact that “OpenIV enables recent malicious mods that allow harassment of players and interfere with the GTA Online experience for everybody.”

In the wake of Take-Two’s order, Steam users hammered Grand Theft Auto 5’s store page with negative reviews, dragging its rating down to “Overwhelmingly Negative” status.

One caveat to this new agreement between Rockstar and Take-Two is that it does not apply to mods that affect online multiplayer, which was the reason given for the order against OpenIV. Another exception is made for the “use or importation of other IP (including other Rockstar IP),” which does not bode well for canceled fan-developed remakes like Red Dead Redemption V.

Rockstar has confirmed that it has reached out to OpenIV’s developer Yuriy “GooD-NTS” Krivoruchko in an attempt to resolve this dispute.

23rd June 2017

SpaceX successfully launches reused Falcon 9 rocket, recovers first stage

SpaceX has successfully launched a Falcon 9 to orbit during its BulgariaSat-1 mission Friday. The launch reused a first stage booster first employed during an Iridium Communications mission in January of this year, after that Falcon 9 first stage was recovered and refurbished.

The mission on Friday was to deliver Bulgaria’s first-ever geostationary communications satellite into orbit, so that it could begin broadcasting communication networks and HDTV signals to clients in parts of Europe. The satellite was built by SSL of Palo Alto, California.

The first stage also landed successfully on the company’s drone ship “Of Course I Still Love You” in the Atlantic Ocean, making it the first booster ever to have successfully landed on both of SpaceX’s two ocean-borne drone landing vessels. That recovery was also the most challenging successful landing so far for SpaceX, because of launch conditions.

SpaceX founder and CEO Elon Musk warned on Twitter prior to the launch that the Falcon 9 used would “experience its highest ever reentry force” during today’s launch, along with record levels of heat, making it less likely that SpaceX will be able to successfully recover the rocket for reuse this time around. In fact, Musk said there’s a “good chance” they don’t get the rocket back.

The launch from Pad 39A at Kennedy Space Center is the second ever re-use of an orbital rocket, after SpaceX successfully completed its first reflight of a recycled Falcon 9 first stage back in March.

The primary mission was also a success, with SpaceX deploying the BulgariaSat-1 satellite into geostationary transfer orbit as of around 3:45 PM EDT. Next up, SpaceX will attempt its Iridium-2 mission launch on Sunday, June 25.

23rd June 2017

VC Justin Caldbeck is taking an indefinite leave of absence, apologizes to the women he ‘made feel uncomfortable’

In light of allegations of sexual harassment and unwanted sexual advances, Binary Capital co-founder and managing partner Justin Caldbeck is taking an indefinite leave of absence, he said in a statement provided to TechCrunch.

In his apology statement, Caldbeck did not outright admit nor deny the allegations of the female founders who came forward. Instead, he directed his apology “first to those women who I’ve made feel uncomfortable in any way, at any time – but also to the greater tech ecosystem, a community that I have utterly failed.”

As Leslie Miley noted on Twitter, Caldbeck kicked off his apology letter with words about how hard the last 24 hours have been on him. That rings hollow because women in tech, and in the workplace at large, have been dealing with this type of nonsense since forever.

Not good to sarting off with how dark his past 24 hours have been. Women spent years dark places due to this type of behavior. https://t.co/XSlbJMA6Tl

— Shaft (@shaft) June 23, 2017

Below is Caldbeck’s full statement.

The past 24 hours have been the darkest of my life. I have made many mistakes over the course of my career, some of which were brought to light this week. To say I’m sorry about my behavior is a categorical understatement. Still, I need to say it: I am so, so sorry.

I direct my apology first to those women who I’ve made feel uncomfortable in any way, at any time – but also to the greater tech ecosystem, a community that I have utterly failed.

The power dynamic that exists in venture capital is despicably unfair. The gap of influence between male venture capitalists and female entrepreneurs is frightening and I hate that my behavior played a role in perpetrating a gender-hostile environment. It is outrageous and unethical for any person to leverage a position of power in exchange for sexual gain, it is clear to me now that that is exactly what I’ve done.

I am deeply ashamed of my lack of self-awareness. I am grateful to Niniane, Susan, Leiti, and the other women who spoke up for providing me with a sobering look into my own character and behavior that I can no longer ignore. The dynamic of this industry makes it hard to speak up, but this is the type of action that leads to progress and change, starting with me.

I will be taking an indefinite leave of absence from Binary Capital, the firm I co-founded in 2014. I will be seeking professional counseling as I take steps to reflect on my behavior with and attitude towards women. I will find ways to learn from this difficult experience – and to help drive necessary changes in the broader venture community.

The Binary team will also be taking measures to ensure that the firm is a safe place for founders of all backgrounds to find the support and resources they need to change the world, without abuse of power or mistreatment of any person.

I owe a heartfelt apology to my family, my investors, my portfolio, and the team at Binary, who have been completely blindsided and in no way deserve the pain I’ve caused. But most of all I apologize again to those who I’ve hurt during the course of my career – and for the damage I’ve done to the industry I care so deeply about.

23rd June 2017

Snowden, Putin, and Hollywood’s hero-worship miasma

What’s so compelling about Oliver Stone’s recent four-part interview series with Vladimir Putin is probably not what the multi-Oscar-winning director intended. It’s the same thing that makes his Snowden biopic its own sort of cipher after the fact.

Both have inadvertently, and strangely by their own design, upset the already shaky foundations of toxic hero worship in the era of hackers, hacktivism, and cyber-espionage.

Stone’s four-part documentary The Putin Interviews premiered over the past week on Showtime. Prior to its airing, the tone was set by a tense appearance on The Late Show With Stephen Colbert in which Stone repeatedly refused to say anything bad about Putin.

When Stone went into a diatribe about how Putin refuses to bad-mouth anyone despite his having been “insulted and abused,” Colbert’s audience was outright laughing at the director. Colbert acidly joked as to whether Putin had Oliver Stone’s dog in a cage somewhere.

That fiasco took a backseat to this week’s development. In the documentary series, Putin shows Stone a phone video of the Russian Air Force kicking major ass against militants in Syria. The internet, being more into fact-checking than a Hollywood director, quickly debunked the video as 2009 footage of a US strike on the Taliban in Afghanistan.

The Kremlin maintains that the video is from the Russian defense ministry. When asked about the veracity of the footage at a press conference, Stone dismissed and devalued the question, characterizing the issue as “blogging bullshit.”

A similar fallout happened with Stone’s film Snowden. Setting aside all the ham-fisted dialogue and painfully dramatic shorthand for both narrative structure and character development, the film also had some not-insignificant fact-checking issues. For instance, the audience is shown that Snowden is the stereotypical hacker wunderkind with several examples that turned out not to tell the real story. We found out later that he was a sysadmin, not a genius developer, and one who only passed the NSA’s famously brutal hacker test because he got his hands on a copy of the answers.


None of this is to dismiss the power and epic explorations of films like Stone’s Platoon, Natural Born Killers, or Born on the Fourth of July. Nor is it to undermine the conversation started by Snowden’s stolen files. But I think it’s time to argue that the kind of blind hero worship we’re seeing in Stone’s recent work typifies how conversations about hacking, surveillance, and human rights are being done a damaging disservice.

Both works are beyond sympathetic to their subjects; they pose as documentary, but instead are interpretations of reality. The main character in each is portrayed faultlessly. Snowden is a hero who had no choice but to do the moral thing; Stone has been up front that his goal with the interview series is to exonerate Putin from what the director sees as misplaced anger about Trump. (Adamantly refusing to believe anything about Russian cyber-espionage and the election, Stone in January labeled it all hysteria, writing, “I never thought I’d find myself praying for the level-headedness of a Donald Trump.”)

Until the end of May, Putin’s soundbite on Russian election hacking and interference was that it was all BS — until he made public statements suggesting a maybe-they-did scenario in which he told press that perhaps “patriotic” Russian hackers had done the dirty in supporting Trump with hacks and various manipulations last year.

For anyone who gives a shit, which ostensibly a documentary director does, this means that there’s no ground to take Putin at his word on the topic. But you wouldn’t know that by watching Stone’s documentary.

The thing about belief in Hollywood and blurred lines in pseudo-docu films is that it has a tendency to leave people thinking that what they’re watching is vetted, fact-checked, and a matter of record. That when Putin tells a very sympathetic Stone the same lines he’s been feeding access journalists like Megyn Kelly, there must be credibility established somewhere. That the Russian president cracking sexist and homophobic jokes is somehow not the same one whose country is right now rounding up gay people to torture and kill in concentration camps.

Or, in Snowden’s flattering depiction, that state surveillance is little more than something that threatens to reveal our embarrassing sexual indiscretions — as if there hadn’t been nearly two decades of people trying to call attention to domestic state surveillance abuses. Or maybe showing why the minimizing of “surveillance harms” by those who stand to benefit from that power, from law enforcement to corporations, leads to a very real set of harms that become virtual border walls and involuntary facial recognition registries.

Because no one questioned what a Hollywood director wanted to believe about Snowden, or the context around his actual story, the end result isn’t charming or heroic, or even very accurate. It’s not even a good story.

To those of us in the know in infosec and hacking, who have quietly watched all this from the inside, there’s a far more interesting story to be told about Snowden. And it echoes Stone’s own Achilles’ heel: would anyone dare criticize a hero like Snowden, when examining that problematic hero worship could explore the very questions Stone only pretends to ask of his subjects?

Hacking culture — especially its activist arms — is equally to blame for films like Snowden. Maybe Stone’s gullible and believes all the hype, or maybe he was just aping the more popular cyber-activist sycophants. Perhaps Oliver Stone simply channeled the black and white thinking of “all US government bad” and the poisonous hero worship that’s rampant in the limelight-chasing, class-conscious circles of pop culture infosec. Sound familiar? The same thread seems to run through Putin’s talking points, too. There’s no conspiracy, but for someone like Stone, it all lines up.


And as we’ve seen with almost everything coming out of Hollywood about hacking, hacktivism, and infosec in the past five years, it lacks the ability to criticize its subjects. Which is something the topic — and the people headlining on the infosec / hacktivism stages — need more than ever. Trust me: Uncritical hero worship is the very last thing anyone needs in this realm right now. Same goes for world leaders.

In the hacked Sony emails, George Clooney, on learning that Oliver Stone had won the race to tell Snowden’s story, wisely remarked that it would be a “hatchet job,” but the one everyone remembers.

The inadvertent cipher of Stone’s folly — or gullibility — becomes an undressing of how history is permanently disfigured.

Sadly, it’s not an unusual narrative for Stone anymore; lately he has preferred conspiracy theories spun by megalomaniacal oracles to his earlier affection for telling challenging stories about morally conflicted antiheroes.

Take for example the Stone film that forced pop culture into a sharp left turn in the early 1990s: Natural Born Killers. That film amplified our collective compulsion to be attracted to serial killers through charismatic psychopaths acid tripping hard on their own fame. It showed that Stone’s films can grab us by the id, leave marks, and leave us politely asking, “Please, sir,” for more.

Except now, chasing his champions in cyber-espionage on the world stage likely won’t go down in history as the letters-of-record Stone seems to be gunning for.

Instead, it may go down in history as expensively produced propaganda.

Images: Photo by Gisela Schober/Getty Images (Snowden, Joseph Gordon-Levitt); Photo by Michael Campanella/Getty Images (Oliver Stone)

23rd June 2017

‘Pokémon Go’ badasses can now play Raid Battles

Some Pokémon Go players can now start working in groups to take on powerful creatures in Raid Battles, one of the long-awaited features for the popular augmented reality game. There’s a catch, though: Pokémon trainers have to be level 35 and up to access the new co-op mode, so don’t get too excited if you’re a noob. In addition to that, developer Niantic says the Raid Battles are only live at “select” Pokémon Go Gyms around the world, though it’s unclear exactly which ones. Those of you who do happen to be near one will have the chance to capture rare Pokémon and unlock a handful of other rewards, such as Rare Candies, Golden Razz Berries and different types of Technical Machines.

Trainers level 35 and above: You can now participate in Raid Battles at select Gyms around the world. pic.twitter.com/spg1okmpw8

Update: Niantic has apparently lowered the bar, and players at level 31 can now join the Raid Battles.

Trainers level 31 and above: You can now participate in Raid Battles at select Gyms around the world.

23rd June 2017

Surface Pro review: Microsoft’s best hybrid notebook plays it safe

The Surface Pro is everything we’ve ever wanted from Microsoft’s Surface line. It fixes the few remaining problems from the Surface Pro 4, a machine that I adored. And yet, it’s far less exciting than its predecessors. It’s the second incremental upgrade since the Surface Pro 3, and while there’s something to be said for sticking with a solid design, in a way it feels like we’ve seen all of this before. Given that it’s been a year and a half since the launch of the Surface Pro 4, I expected more.

The Surface Pro doesn’t look significantly different from the previous two models, even though a lot has changed under the hood. That’s not a bad thing: The slim metallic case is still pretty attractive, and Microsoft has rounded out its edges a bit so it’s more comfortable to hold. Every model of the Surface Pro weighs around 1.7 pounds, which is a bit hefty for a tablet, but incredibly light for an ultraportable notebook. Notably, the Core i5 model is a bit lighter than before, thanks to an ingenious fanless design.

The iconic kickstand is once again a key feature for the Surface Pro, and now it can be lowered even further to 165 degrees. Microsoft calls this orientation “studio mode,” as it’s ideal for digital artists to use for drawing. And, of course, it’s also reminiscent of its unique Surface Studio all-in-one PC, which also has a screen that tilts into an easel-like angle. The kickstand’s hinge looks a bit different than the Pro 4’s, but it otherwise works the same. Opening and closing the kickstand is as smooth as ever, and I grew to enjoy using the studio mode for doodles.

If you haven’t used a Surface before, the Pro’s kickstand might take some getting used to. It’s easy enough to use on a flat table — just pull it out and find the ideal angle for the screen — but it’s trickier to orient on your lap. That’s particularly true if you’re resting it against your bare legs, as the kickstand’s edge tends to dig into your skin after a while. It’s not impossible to use it on your lap, though. I had no problem writing most of this review while sitting with the Surface Pro on an uncomfortable park bench.

When it comes to ports, not much has changed. There’s only a single USB 3.0 port, a Mini DisplayPort, a microSD card slot tucked underneath the kickstand, a headphone jack and the power connection. Just like with the Surface Laptop, it’s disappointing to see Microsoft skip on USB-C, which would make its machines more versatile and easier to charge.

You may have noticed a recurring theme so far: a distinct lack of change. While it’s nice to see more refinement in the Surface Pro line, it’s beginning to feel a bit stagnant. It’s 2017; it would be nice to see Microsoft try for thinner screen bezels, which are all the rage on Dell’s XPS line and the newer iPad Pro, or add in some completely new functionality.

Once again, the Surface Pro packs in a gorgeous 12.3-inch screen with a sharp 2,736 by 1,824 pixel resolution (267 pixels per inch). That’s on par with what Apple would call a “Retina” display, which means it’s ideal for high resolution photos, as well as for making text look extra smooth. It’s also a great screen for video, though you’ll have to live with black bars due to its 3:2 aspect ratio.

The Surface Pro’s screen manages to look bright and bold no matter what you throw at it. And while it’s a bit reflective, like most tablet screens, you can still make out what’s on the screen in direct sunlight. It’s also a reminder of why Microsoft has stuck with the 3:2 aspect ratio, whereas most other devices opt for 16:9 widescreen these days. Having much more vertical screen space is simply very useful when it comes to browsing the web and using productivity apps.

While the Surface Pro is technically a tablet on its own, it transforms into a full laptop with its keyboard accessories. The new cloth-like Alcantara keyboard feels slightly improved from the last model, with a satisfying amount of depth to every key press. Their touchpads are incredibly smooth and responsive, as well. Unfortunately, the keyboards are still sold separately for $130, and you’ll have to shell out $160 if you want the more premium Alcantara covering. The latter is something we also saw on the Surface Laptop, and while it might seem like a mistake to put cloth right beside your keyboard, I had no problem cleaning off mild stains with a damp cloth. It’s less clear how it’ll last in the long term, though.

While Microsoft unveiled a new Pen to go alongside the Surface Pro, it’s no longer included in the box. Even worse, it’s more expensive than the last model at $100. At least you’re getting a decent upgrade for your money. It features 4,096 levels of pressure — twice as much as before — and it’s even more useful for artists, since you can tilt it to the side to shade your drawings.

The new Pen feels a bit thicker, and it attaches to the Surface Pro with a stronger magnet. It still features an anachronistic eraser button at the top, which serves as a quick way to delete your drawings, as well as a shortcut for Windows’ Sticky Notes and OneNote. But it loses the previous model’s clip, which was great for securing it to your shirt pocket. (Perhaps even Microsoft thought that was too nerdy.)

Most importantly, the revamped Pen simply feels better to use. It’s a lot more like putting pen to paper and has just the right amount of resistance. I’m not much of an artist, so I can’t speak to its illustrative capabilities. But I grew to appreciate the Pen’s smooth writing performance for jotting down quick notes. Of course, for $100, you should make sure you’ll use it for more than just that.


The Surface Pro packs in Intel’s 7th generation CPUs, which, in addition to being faster than the last model, also adds new features like hardware 4K video decoding. And, as I mentioned before, the mid-range Core i5 model is now fanless, like the entry-level Core M3 variant. Not only does that make it a bit lighter, it’s also completely silent even when it’s being stressed. That’s a pretty big deal, since that Core i5 chip is a full-fledged dual core CPU, not an underpowered processor like some other fanless designs.

Performance-wise, the Surface Pro is on par with most other ultraportables, including the Surface Laptop. It never felt like I was settling for a lesser experience, which is more than I can say for the iPad Pro or any Android hybrid tablets. Since it runs Windows 10 Pro, you can install any Windows application, and it’ll run just fine. It kept up with my daily workflow, which typically involves having dozens of browser tabs open, along with photo editing apps, Spotify and Evernote. Since it’s relying on integrated graphics, don’t expect to play any heavy-duty games, though.

While the Surface Pro is a bit heavier than a typical tablet, it’s still comfortable to use for reading comics and ebooks. And it does a far better job at disappearing into your bag than most ultraportables. Versatility has always been Microsoft’s goal with the Surface line, and this is clearly the company’s most successful entry yet.

Battery life

The biggest improvement this time around is in battery life. While the Pro 4 lasted for 7 hours and 15 minutes during our battery test, which put it on the low-end for ultraportables, the Surface Pro kept going for 13 hours and 40 minutes. And when it comes to more real-world usage, it typically had around 20 percent of battery life left after a full work day. Now it’s in a class of notebooks, like the MacBook Air and Surface Laptop, that I can take anywhere without worrying about carrying a charger.

The Surface Pro starts at $799 with a Core M3 processor, 4GB of RAM and 128GB of storage. These days, that’s just downright paltry. If you were going to get one, I’d suggest saving up for the $1,299 Core i5 model with 8GB of RAM and a 256GB SSD. Yes, that’s a lot of money, especially when you add $100 for the Pen and another $130 (or $160!) for a keyboard. The pricing makes a bit more sense when compared directly to laptops — but laptops also come with keyboards. You can do better, Microsoft.

I’ve long argued that Microsoft should be bundling keyboards with the Surface Pros, but this year it’s taking a step even further backwards by making the Pen an additional purchase. Simply put: this is a mistake. Other companies are quickly improving their own hybrid laptop/tablet designs, like Lenovo’s Thinkpad X1 Tablet and HP’s Spectre X2, and it won’t be long before they surpass the Surface Pro in terms of features and ingenuity.

It’s possible that Microsoft isn’t being aggressive with the Surface Pro’s pricing because it wants the wider hybrid PC market to thrive. Those hybrids all run Windows, and they’re still an essential part of Microsoft’s ecosystem. So with the Surface, rather than aiming to crush its competitors, Microsoft has to balance its own success with supporting other PC makers as well.

With all of its improvements, the Surface Pro sits atop the heap of hybrid laptops out there. But I can’t help but feel like Microsoft missed an opportunity to show the competition how it’s done. It’s pricier than it needs to be, and it doesn’t make any design leaps over previous generations.

As it stands, the Surface Pro is a fantastic machine, but it’s not enough of an improvement for Surface Pro 4 owners to upgrade. Perhaps Microsoft was more focused on the Surface Laptop this year, but hopefully we’ll see bigger changes with the next Surface Pro.

23rd June 2017

Pokémon Go’s new gyms award coins faster after complaints

Pokémon Go’s revamped gyms remain a work in progress, as trainers are quickly discovering. It appears that Niantic is keeping an eye on early critiques of the features, though, as the developer has already tweaked how the maligned coin payouts work.

Players found that they were earning far less of the game’s currency, PokéCoins, under the new gym system. In the past, having one Pokémon defend a gym would automatically grant users 10 PokéCoins per day, even if it was quickly ousted from its spot. When the update launched, players instead received coins on an hourly basis — and just one for every hour their Pokémon sat in a gym. And Pokémon must now be defeated before players see any of their prize money, so those whose Pokémon are the very best will just hoard coins forever before being forced to turn them over to their owners.

For busier areas where Pokémon knock out other defenders on a regular basis, this meant that some trainers received no rewards for their efforts, since their Pokémon spent less than an hour in the gym. Gyms have become a greater time investment for those who can even manage to hold onto one for any length of time.

“I can understand the goal with this update was to encourage more gym activity,” wrote Redditor LordParkin in a thread criticizing the new coin payout system. “For the moment, while this system is fresh and new, I’m sure there is indeed more activity. But if the system doesn’t change so that players get rewarded fairly for their time invested, it’s hard to see how gym activity won’t rapidly decline in the near future — which is surely in no-one’s interest.”

It seems as though Niantic has already caught on to the issues detailed in this and numerous other threads. Players now report that the amount of time it takes to accrue coins has gone down to every 10 minutes, as opposed to every 60. They’ll still need their Pokémon to return to them to see the dividends, but even if they manage to hold a gym for an hour, they’ll get six times as many coins as before.

As the new gyms and Raid Battles slowly make their way to all players, don’t be surprised if Niantic continues to make changes to how they work. Compared to Pokémon Go’s first summer, the developer seems to be making the changes fans want far more quickly.

23rd June 2017

Take a look inside The Art of Prey

Prey is a game anchored in a remarkable sense of place. In The Art of Prey, a collaboration between developer Arkane Studios and Dark Horse Books, you can explore its creation. The book chronicles the creation of a space station called Talos 1 where terrible things happen as it floats on the far side of the moon.

You don’t have to wait until The Art of Prey’s June 27 release date to peer behind the scenes, though. Thanks to Dark Horse Books, you’ll find several pages below that show everything from grayscale renditions of the game’s characters, environments and mysterious alien plague to the color palettes chosen to set the mood in Talos 1’s art deco-inspired environments.

If you’ve played Prey, you’ll notice something else in the images, too: Many of the environments look quite similar to the concept art that preceded them. You won’t have to squint much to see how paintings evolved to become the game. Instead, you’ll recognize landmarks which seem to have existed with a remarkable amount of clarity long before they became 3D environments on PlayStation 4, Xbox One and Windows PC earlier this year.

23rd June 2017

Celebrate Sonic’s 26th birthday with these rad sneakers

Sonic the Hedgehog turns 26 today, an age accompanied by little fanfare. A Twitter account dedicated to his birthday does have a list of ways to help fans celebrate with Sonic, but these limited-edition sneakers seem like the coolest option.

Japan’s Anippon collaborated with the Sonic the Hedgehog brand to produce these swanky slip-ons, which are modeled after Sonic’s own iconic footwear. Unlike Sonic’s shoes, these have the speedster’s name printed on the heel, to ensure that everyone is well aware of the inspiration behind them.

The sneakers will set you back 7,020 yen with shipping, or about $63, and come in U.S. men’s sizes six through 10.5. They’re on sale starting today, which is kind of like Sonic giving us a present on his own birthday.

Sega’s got some other Sonic-branded swag on sale today, like pins and keychains. It’s those sneakers that are most enticing, though — have you ever seen Sonic run without them? It may not be too much of a stretch to say that they’re responsible for his high speed. I probably should buy a pair and expense them. Y’know … for research.

23rd June 2017

Google now has all the data it needs, will stop scanning Gmail inboxes for ad personalization

Here’s a surprise announcement from Google: It will stop scanning the inboxes of Gmail’s free users for ad personalization at some point later this year.

Google already doesn’t do this for business users who subscribe to its G Suite services, but until now, it routinely scanned the inboxes of its free users to better target ads for them. It then combined that information with everything else it knows about its users to build its advertising profiles for them.

Diane Greene, Google’s senior VP for Google Cloud, says the company made this decision because it “brings Gmail ads in line with how we personalize ads for other Google products.”

Google won’t stop showing ads in Gmail, though, and it’s worth noting that given how much the company already knows about all of its users, it just might not need these additional signals from Gmail. And maybe they even turned out to be relatively useless or even detrimental for ad performance.

As much as I’d like to believe that Google is doing this out of the goodness of its heart, chances are the only reason the company would make any changes to its advertising products is because it has data that shows it doesn’t need this additional information about its users. So far, the inbox scanning doesn’t seem to have hampered Gmail’s growth; it now has 1.2 billion users.

That, of course, is not what Greene says in today’s announcement. According to the official line, the idea here is to more closely align G Suite’s Gmail and consumer Gmail.

23rd June 2017

Watch SpaceX launch a re-used Falcon 9 rocket live right here

SpaceX is attempting to re-launch a Falcon 9 it used first in January, taking off from Launch Complex 39A at Kennedy Space Center with a two-hour launch window opening at 2:10 PM EDT today (UPDATE: The launch is now set for 3:10 PM EDT as SpaceX will be conducting additional ground checks). The mission, BulgariaSat-1, will attempt to deliver a geostationary orbital commercial comms satellite, the first in Bulgaria’s history, to orbit. The broadcast above should kick off around 15 minutes prior to the opening of the launch window.

The first stage rocket in the Falcon 9 was originally used to launch the Iridium-1 mission from Vandenberg this past January, meaning the rocket has been refurbished and cleared for a return to flight in just six months. If successful, this will be the second re-use of a recovered first stage for SpaceX, and the company’s fastest turnaround yet in returning a used rocket to flight.

The payload is a satellite designed to provide direct-to-home TV and data connections to parts of Europe, including HDTV and UHDTV broadcast programming. It’s a new client for SpaceX, and this is a key moment in terms of proving out its relaunch capabilities, so there’s a lot riding on today going well. Also, SpaceX is currently aiming to launch another mission, for return client Iridium, in under 48 hours from California’s Vandenberg Air Force Base, so the stakes are higher still.

23rd June 2017

TumbleSeed may never recoup its costs, team says

Roly-poly roguelike TumbleSeed has its fans, but the game didn’t turn out to be the success that developer aeiowu had hoped it’d be. In a postmortem for the Nintendo Switch, PlayStation 4 and Windows PC game, the team opens up about the “post-release depression, poor reviews and bad sales” that followed TumbleSeed’s launch.

It looks unlikely that TumbleSeed will make back its budget, wrote Greg Wohlwend, lead designer at aeiowu. He pinpointed a handful of factors contributing to this, with critical reception and the game’s tough difficulty level leading the charge.

“We released TumbleSeed on May 2nd to the critical consensus that it was ‘too hard,’” Wohlwend wrote. “Large outlets gave us tepid scores and though others scored us higher, it was too big a hit to the collective opinion of our potential audience. Many considered TS unfair and unforgiving. That’s the wrong kind of hard and this stigma permeated the discussion of our game.”

Polygon was among those that criticized TumbleSeed’s difficulty, but it wasn’t just professional reviewers who had trouble with the unique platformer. Only 0.2 percent of players actually completed the game, based on aeiowu’s data; few others progressed past its early Jungle area.

TumbleSeed’s an unconventional kind of game, Wohlwend explained, and the ask on players to learn its control scheme, game system, randomly generated mountains, myriad enemies and varied powers was perhaps too great.

“It’s a pressure cooker filled with gunpowder that only a monk could endure,” he said of the game’s “overwhelming” amount of moving parts.

A combination of low scores and intimidating gameplay means the team would need to sell twice as many copies as planned to recoup its costs. In response, aeiowu is retooling the game into a more accessible experience.

An update for the PC version of the game is out now, adding four new areas to play on as well as abilities that should make things a bit easier, like lowering the damage taken and giving players the chance to sneak past enemies.

“While I don’t think this update will change the course of our success, it does feel really good to know we gave it our all especially when it was hardest to,” wrote Wohlwend. “Working on this update acted as a sort of therapy for all of us.”

The “4 Peaks Update” will come to consoles as soon as possible, but PC players can try an easier version of TumbleSeed now.

23rd June 2017

Titanfall 2 getting new maps and more in next free DLC pack

Titanfall 2’s sixth add-on, The War Games, will bring in two new maps and a game-changing feature when it launches June 27, developer Respawn Entertainment announced this week.

The main map, War Games, is an urban environment created in a pilot simulator pod. It features a mix of wide-open city streets for titan combat and multistory buildings with large windows for pilot battles. There are also a few open tank garages in which both types of forces can duke it out, along with a bonus for acrobatic pilots: bridges between the buildings whose Tron-esque glowing sides are perfect for wallrunning.

Respawn is also including a new map called Traffic for Live Fire, the intense pilots-only mode that the studio added to Titanfall 2 in February. Another addition to the game in the War Games add-on is an execution named Shadow Boxing, which players can unlock by getting 20 pilot kills while a holopilot ability is active.

But perhaps the most intriguing new feature in The War Games is a third weapon slot for pilots. In addition to the usual primary and secondary slots in the weapon loadout, pilots will now be able to equip an anti-titan weapon in a dedicated slot. This means that players will no longer have to choose between taking a pistol or an anti-titan weapon onto the battlefield.

The War Games, like the rest of Titanfall 2’s downloadable content, will be available free on PlayStation 4, Windows PC and Xbox One.

23rd June 2017

Modern Warfare Remastered finally gets a stand-alone release

Activision will release a stand-alone version of Call of Duty: Modern Warfare Remastered — available both physically and digitally — on June 27. The game will be available first on PlayStation 4, with Windows PC and Xbox One versions to follow.

The stand-alone release will cost $39.99 and will not include the Modern Warfare Remastered Variety Map Pack.

Activision did not provide release dates for the Windows PC and Xbox One versions of the game.

Call of Duty: Modern Warfare Remastered was originally released alongside last year’s Call of Duty: Infinite Warfare. The remaster of 2007’s Call of Duty 4 was only available as part of Infinite Warfare’s Legacy, Legacy Pro, and Digital Deluxe editions.

Activision said in a release that it has a special event called Call of Duty “Days of Summer” planned for Modern Warfare Remastered players to commemorate the launch.

The five-week community celebration begins June 27th and will feature a bevy of in-game giveaways, XP events and new playlists across multiple titles, including a new summer-themed map for Modern Warfare Remastered that will be available through the duration of the event, along with much more to be announced on Tuesday.

23rd June 2017

The next video game controller is your voice

For all of modern gaming’s advances, conversation is still a fairly unsophisticated affair. Starship Commander, an upcoming virtual reality game on Oculus and SteamVR, illustrates both the promise and challenge of a new paradigm seeking to remedy that: using your voice.

In an early demo, I control a starship delivering classified goods across treacherous space. Everything is controlled by my voice: flying the ship is as simple as saying “computer, use the autopilot,” while my sergeant pops up in live action video to answer questions.

At one point, my ship is intercepted and disabled by a villain, who pops onto my screen and starts grilling me. After a little back and forth, it turns out he wants a deal: “Tell you what, you take me to the Delta outpost and I’ll let you live.”

I try to shift into character. “What if I attack you?” I say. No response, just an impassive yet expectant stare. “What if I say no?” I add. I try half a dozen responses, but — perhaps because I’m playing an early build of the game, or maybe it just can’t decipher my voice — I can’t seem to find the right phrase to unlock the next stage of play.

It’s awkward. My immersion in the game all but breaks down when my conversational partner does not reciprocate. It’s a two-way street: if I’m going to dissect the game’s dialogue closely enough to craft an interesting reply, the game has to keep up with my side of the conversation too.

The situation deteriorates. The villain eventually gets fed up with my inability to carry the conversation. He blows up my ship, ending the game.

Yet there is potential for a natural back and forth conversation with characters. There are over 50 possible responses to one simple question from the sergeant — “Is there anything you’d like to know before we start the mission?” — says Alexander Mejia, the founder and creative director at Human Interact, which is designing the game. The system is powered by Microsoft’s Custom Speech Service (similar technology to Cortana), which sends players’ voice input to the cloud, parses it for true intent, and gets a response in milliseconds. Smooth voice control coupled with virtual reality means a completely hands-free, lifelike interface with almost no learning curve for someone who’s never picked up a gamepad.
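The flow Mejia describes (transcribe the player’s speech, resolve it to an intent, then play a scripted response) can be pictured in a few lines. This is an illustrative stand-in, not Human Interact’s code or the Custom Speech Service API; the intent names and trigger phrases are invented:

```python
from typing import Optional

# Invented intents and trigger phrases; a real system would use a trained
# language-understanding model rather than substring matching.
INTENTS = {
    "engage_autopilot": ["use the autopilot", "turn on autopilot"],
    "ask_briefing": ["anything i should know", "tell me about the mission"],
}

def match_intent(transcript: str) -> Optional[str]:
    """Map a recognized transcript to the first intent it triggers."""
    text = transcript.lower()
    for intent, phrases in INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return None  # unrecognized: the character can only stare blankly

print(match_intent("Computer, use the autopilot."))  # engage_autopilot
```

The hard part, as the demo shows, is coverage: any phrasing that falls outside the anticipated set gets the blank stare.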

Speaking certainly feels more natural than selecting one of four dialogue options from a menu, as a traditional roleplaying game might provide. It makes me more attentive in conversation — I have to pay attention to characters’ monologues, picking up on details and inconsistencies while coming up with insightful questions that might take me down a serendipitous narrative route (much like real life). No, I don’t get to precisely steer a ship to uncharted planets since voice control, after all, is not ideal for navigating physical space. But what this game offers instead is conversational exploration.

Video games have always been concerned with blurring the lines between art and real life.

Photorealistic 4K graphics, the disintegration of levels into vast open worlds, virtual reality placing players inside the skull of another person: The implicit end goal of every gaming advance seems to be to create an artificial reality indistinguishable from our own. Yet we communicate with these increasingly intelligent games using blunt tools. The joystick/buttons and keyboard/mouse combinations we use to speak to games do little to resemble the actions they represent. Even games that use lifelike controls, from the blocky plastic Time Crisis guns to Nintendo Switch Joy-Cons, still involve scrolling through menus and clicking on dialogue options. The next step is for us to talk to games.

While games that use the voice have cropped up over the years — Seaman on Sega’s Dreamcast, Lifeline on the PlayStation 2, Mass Effect 3 on the Xbox 360’s Kinect — their commands were often frustratingly clunky and audio input never seemed more than a novelty.

That may be coming to an end. Well-rated audio games such as Papa Sangre and Zombies, Run! have appeared on the iPhone. At E3 this month, Dominic Mallinson, a Sony senior vice president for research and development, referred to natural language understanding as among “some of the technologies that really excite us in the lab right now.”

More than anything, the rush by Microsoft, Google, Amazon and Apple to dominate digital assistants is pushing the entire voice computing field forward. In March, The Information reported that Amazon CEO Jeff Bezos wants gaming to be a “killer app” for Alexa, and the company has paid developers that produce the best performing skills. Games are now the top category for Alexa, and the number of customers playing games on Echo devices has increased tenfold in the last year, according to an Amazon spokeswoman. “If I think back on the history of the world, there’s always been games,” says Paul Cutsinger, Amazon’s head of Alexa voice design education. “And it seems like the invention of every new technology comes along with games.”

Simply: If voice assistants become the next major computing platform, it’s logical that they will have their own games. “On most new platforms, games are one of the first things that people try,” says Aaron Batalion, a partner focused on consumer technology at venture capital firm Lightspeed Venture Partners. “It’s fun, engaging and, depending on the game mechanics, it’s often viral.” According to eMarketer, 35.6 million Americans will use a voice assistant device like Echo at least once a month this year, while 60.5 million Americans will use some kind of virtual voice assistant like Siri. The question is, what form will these new games take?

Gaming skills on Alexa today predominantly trace their lineage to radio drama — the serialized voice acted fiction of the early 20th century — including RuneScape whodunnit One Piercing Note, Batman mystery game The Wayne Investigation and Sherlock Holmes adventure Baker Street Experience.

Earplay, meanwhile, has emerged as a leading publisher of audio games, receiving over $10,000 from Amazon since May, according to Jon Myers, who co-founded the company in 2013. Myers describes their work as “stories you play with your voice,” and the company crafts both their own games and the tools that enable others to do the same.

For instance, in Codename Cygnus, you play a James Bond-esque spy navigating foreign locales and villains with contrived European accents, receiving instructions via an earpiece. Meanwhile, in Half, you navigate a surreal Groundhog Day scenario, picking up clues on each playthrough to escape the infinitely repeating sequence of events.

Like a choose-your-own-adventure novel, these experiences intersperse chunks of narrative with pivotal moments where the player gets to make a decision, replying with verbal prompts. Plot the right course through an elaborate dialogue tree and you reach the end. The audio storytelling activates your imagination, yet there is little agency as a player: The story chugs along at its own pace until you reach each waypoint. You are not so much inhabiting a character or world as co-authoring a story with a narrator.
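The branching structure described above is essentially a small state machine keyed on verbal prompts. A minimal sketch, with scene names and keywords invented for illustration:

```python
# Invented scenes and keyword prompts, purely to illustrate the
# choose-your-own-adventure structure described above.
STORY = {
    "intercepted": {
        "narration": "The villain demands passage to the Delta outpost.",
        "branches": {"delta outpost": "outpost", "fire": "destroyed"},
    },
    "outpost": {"narration": "You set a course. The story continues.", "branches": {}},
    "destroyed": {"narration": "A volley of missiles. Game over.", "branches": {}},
}

def advance(scene: str, reply: str) -> str:
    """Follow the branch whose keyword appears in the player's reply."""
    for keyword, next_scene in STORY[scene]["branches"].items():
        if keyword in reply.lower():
            return next_scene
    return scene  # no match: the story waits at the same waypoint

print(advance("intercepted", "Sure, I'll take you to the Delta outpost."))  # outpost
```

Plot the right keywords and the story moves; say anything else and it idles at the same waypoint, which is exactly the "waiting their turn to talk" feeling the genre struggles with.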

“What you see with the current offerings from Earplay springs a lot out of what we did at Telltale Games over the last decade,” says Dave Grossman, Earplay’s chief creative officer. “I almost don’t even want to call them games. They’re sort of interactive narrative experiences, or narrative games.”

Grossman has had a long career considering storytelling in games. He is widely credited with creating the first game with voice acting all the way through — 1993’s Day of the Tentacle — and also worked on the Monkey Island series. Before arriving at Earplay, he spent a decade with Telltale Games, makers of The Wolf Among Us and The Walking Dead.

Earplay continues this genre’s bloodline: The goal is not immersion but storytelling. “I think [immersion] is an excellent thing for getting the audience involved in what you want, in making them care about it, but I don’t think it’s the be-all-end-all goal of all gaming,” says Grossman. “My primary goal is to entertain the audience. That’s what I care most about, and there are lots of ways to do that that don’t involve immersing them in anything.”

In Earplay’s games, the “possibility space” — the degree to which the user can control the world — is kept deliberately narrow. This reflects Earplay’s philosophy. But it also reflects the current limitations of audio games. It’s hard to explore physical environments in detail because you can’t see them. Because Alexa cannot talk and listen at the same time, there can be no exchange of witticisms between player and computer, only each side talking at pre-approved moments. Voice seems like a natural interface, but it’s still essentially making selections from a multiple choice menu. Radio drama may be an obvious inspiration for this new form; its overacted tropes and narrative conventions are also well-established for audiences. But right now, like radio narratives, the experience of these games seems to still be more about listening than speaking.

Untethered, too, is inspired by radio drama. Created by Numinous Games, which previously made That Dragon, Cancer, it runs on Google’s Daydream virtual reality platform, combining visuals with voice and a hand controller.

Virtual reality and voice control seem to be an ideal fit. On a practical level, speech obviates the need for novice gamers to figure out complicated button placements on a handheld controller they can’t see. On an experiential level, the combination of being able to look around a 360 degree environment and speaking to it naturally brings games one step closer to dissolving the fourth wall.

In the first two episodes, Untethered drops you first into a radio station in the Pacific Northwest and then into a driver’s seat, where you encounter characters whose faces you never see. Their stories slowly intertwine, but you only get to know them through their voice. Physically, you’re mostly rooted to one spot, though you can use the Daydream controller to put on records and answer calls. When given the cue, you speak: your producer gets you to record a radio commercial, and you have to mediate an argument between a husband and wife in your back seat. “It’s somewhere maybe between a book and a movie because you’re not imagining every detail,” says head writer Amy Green.

The game runs on Google’s Cloud Speech platform, which recognizes voice input and may return 15 or 20 lines responding to whatever you might say, says Green. While those lines may steer the story in different directions, the outcome of the game is always the same. “If you never speak a word, you’re still gonna have a really good experience,” she says.

This is a similar design to Starship Commander: anticipating anything the player might say, so as to record a pre-written, voice-acted response.

“It sounds like a daunting task, but you’d be surprised at how limited the types of questions that people ask are,” says Mejia of Human Interact. “What we found out is that 99% of people, when they get in VR, and you put them in the commander’s chair and you say, ‘You have a spaceship. Why don’t you go out and do something with it?’ People don’t try to go to the fast food joint or ask what the weather’s like outside. They get into the character.”

“The script is more like a funnel, where people all want to end up in about the same place,” he adds.

Yet for voice games to be fully responsive to anything a user might say, traditional scripts may not even be useful. The ideal system would use “full stack AI, not just the AI determining what you’re saying and then playing back voice lines, but the AI that you can actually have a conversation with,” says Mejia. “It passes the Turing test with flying colors; you have no idea if it’s a person.”

In this world, there are no script trees, only a soup of knowledge and events that an artificial intelligence picks and prunes from, reacting spontaneously to what the player says. Instead of a tightly scripted route with little room for expression, an ideal conversation could be fluid, veering off subject and back. Right now, instead of voice games being a freeing experience, it’s easy to feel hemmed in, trapped in the worst kind of conversation — overly structured with everyone just waiting their turn to talk.

An example of procedurally generated conversation can be found in Spirit AI’s Character Engine. The system creates characters with their own motivations and changing emotional states. The dialogue is not fully pre-written, but draws on a database of information — people, places, event timeline — to string whole sentences together itself.

“I would describe this as characters being able to improvise based on the thing they know about their knowledge of the world and the types of things they’ve been taught how to say,” says Mitu Khandaker, chief creative officer at Spirit AI and an assistant professor at New York University’s Game Center. Projects using the technology are already going into production, and should appear within two years, she says. If games like Codename Cygnus and Baker Street Experience represent a more structured side of voice gaming, Spirit AI’s engine reflects its freeform opposite.
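One way to picture the improvised approach Khandaker describes: instead of selecting pre-written lines, a character fills phrasings it has been taught with facts from its knowledge of the world. A toy sketch, not Spirit AI’s Character Engine; the facts, moods and templates are all invented:

```python
# A character's knowledge of the world, plus phrasings it has been
# taught, combine into improvised lines. Everything here is invented.
KNOWLEDGE = {"suspect": "the harbormaster", "place": "the docks", "time": "midnight"}

TEMPLATES = {
    "calm": "I saw {suspect} near {place} around {time}.",
    "anxious": "Look, {suspect} was at {place}. At {time}! I'm sure of it.",
}

def improvise(mood: str) -> str:
    """Compose a line reflecting the character's current emotional state."""
    return TEMPLATES[mood].format(**KNOWLEDGE)

print(improvise("calm"))  # I saw the harbormaster near the docks around midnight.
```

The same facts come out differently as the character’s emotional state shifts, which is the kind of variation a fixed dialogue tree cannot cheaply provide.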

Every game creator deals with a set of classic storytelling questions: Do they prefer to give their users liberty or control? Immersion or a well-told narrative? An experience led by the player or developer? Free will or meaning?

With the rise of vocal technology that allows us to communicate more and more seamlessly with games, these questions will become even more relevant.

“It’s nice to have this idea that there is an author, or a God, or someone who is giving meaning to things, and that the things over which I have no control are happening for a reason,” says Grossman. “There’s something sort of comforting about that: ‘You’re in good hands now. We’re telling a story, and I’m going to handle all this stuff, and you’re going to enjoy it. Just relax and enjoy that.'”

In Untethered, there were moments when I had no idea if my spoken commands meaningfully impacted the story at all. Part of me appreciated that this mimics how life actually works. “You just live your life and whatever happened that day was what was always going to happen that day,” Green says. But another part of me missed the clearly telegraphed forks in the road that indicated I was about to make a major decision. They are a kind of fantasy of perfect knowledge, of cause and effect, which don’t always appear in real life. Part of the appeal of games is that they simplify and structure the complexity of daily living.

As developers wrestle with this balance, they will create a whole new form of game: one that’s centered on complex characters over physical environments; conversation and negotiation over action and traditional gameplay. The idea of what makes a game a game will expand even further. And the voice can reduce gaming’s barrier to entry for a general audience, not to mention the visually and physically impaired (the Able Gamers Foundation estimates 33 million gamers in the US have a disability of some kind). “Making games which are more about characters means that more people can engage with them,” says Khandaker. “Not everybody is necessarily into games which are about violence or shooting but everyone understands what it is to talk to people. Everybody knows what it is to have a human engagement of some kind.”

Still, voice gaming’s ability to bring a naturalistic interface to games matters little if it doesn’t work seamlessly, and that remains the industry’s biggest point to prove. A responsive if abstract gamepad is always preferable to unreliable voice control. An elaborate dialogue tree that obfuscates a lack of true intelligence beats a fledgling AI which can’t understand basic commands.

I’m reminded of this the second time I play the Starship Commander demo. Anticipating the villain’s surprise attack and ultimatum, I’m already resigned to the only option I know will advance the story: agree to his request.

“Take me to the Delta outpost and I’ll let you live,” he says.

“Sure, I’ll take you,” I say.

This time he doesn’t stare blankly at me. “Fire on the ship,” he replies, to my surprise.

A volley of missiles and my game is over, again. I take off my headset to find David Kuelz, a writer on the game who set up the demo, laughing. He watched the computer convert my speech to text.

“It mistook ‘I’ll take you’ for ‘fuck you,'” he says. “That’s a really common response, actually.”

23rd June 2017

Synthetic iris could let cameras react to light like our eyes do

An artificial iris can open and close in response to sunlight without any other outside control, just like the ones in your eyes. This could help improve cameras and, eventually, repair damaged human eyes or control tiny robots that react to their surroundings.

In the eyes of humans and many other animals, the pupil is a hole that lets light inside the eyeball. The iris is the coloured part of your eye, a thin circle that controls the size of the pupil, modulating how much light gets through.

In bright light, the iris contracts to shrink the pupil, protecting the sensitive retina inside your eye, which sends visual signals to the brain. In the dark, the iris opens to let in more light so you can see. The same concept is used in cameras, which have an aperture that opens or closes to admit the right amount of light to create an image.


Such artificial apertures normally require an external sensor to tell them when to open or close. But now, Arri Priimägi at Tampere University of Technology in Finland and his colleagues have created one that opens and closes on its own.

To build their synthetic iris, they started with a thin disc 14 millimetres across, on which 12 radial petals were cut through the middle without reaching the edge – like a poorly sliced pizza. The disc is made of polymerised liquid crystal elastomer, a rubbery material that changes shape in response to heat.

When in the dark, each petal is bent and curled outward, leaving a round pupil-like hole in the middle. To make the iris respond to light like our eyes do, rather than to heat, the researchers added a red dye to their liquid-crystal mixture. When blue or green light hits the dye, it heats up, triggering the petals to curl back down and close the aperture.

“We shine light on the material and it changes its shape,” says Priimägi. “This self-regulation is new in this work and it’s what makes us excited about it.”

The team was motivated by the fact that artificial irises used now to treat humans with eye problems cannot change the size of the pupil – they are essentially just fixed contact lenses. With a set pupil size that is generally quite small and suited to bright sunlight, patients lose much of their sight in the dark.

Priimägi says the device is not quite ready to be implanted in a human eye because it doesn’t have precise enough control over aperture size and only responds to fairly strong light. “This is the first step – maybe we can go there one day,” he says.

“This is great, but applications will come down to the details,” says Jeremy Lerner, president of LightForm, Inc, a US imaging instrumentation company. “It depends on how fast it closes, how much light it lets through, and at what wavelengths.”

The artificial iris can close in seconds, but that will need to be sped up to the millisecond level for many applications, such as in sensitive cameras that could be ruined by suddenly pointing at a bright object. It may also need to close more tightly – at present, it still lets around 10 per cent of light through when fully shut.

But the researchers say these issues can be resolved. They hope the iris could eventually be used in microrobotics as sensors for tiny machines that can react to their surroundings.

“It’s an exciting case of a new world opening up with autonomous soft apertures driven by light in robots,” says Mark Warner at the University of Cambridge. “It’s a very nice piece of work.”

Journal reference: Advanced Materials, DOI: 10.1002/adma.201701814

23rd June 2017

Gorgeous robot adventure Machinarium modernized with new engine and more

Czech studio Amanita Design has updated Machinarium, one of the most beloved and acclaimed graphic adventure games of the past decade, refreshing it with major new features like a new engine and gamepad support.

“We’ve reprogrammed Machinarium from the scratch,” said Amanita in its Steam Community announcement of the game’s Definitive Version yesterday. The original game ran in Flash, which didn’t allow it to scale well across screen sizes. Amanita has replaced it with a custom DirectX-based engine that works well on modern high-resolution displays, even in fullscreen view.

Other additions include gamepad compatibility, which allows for playing Machinarium in Steam Big Picture mode. The game explicitly works with the Xbox 360 and Xbox One controllers, and offers “experimental support” for others. There are also 12 new Steam achievements, Steam Cloud saves, Steam leaderboard functionality and localization into 14 different languages.

There’s one major drawback to the Definitive Version. Because it is a complete reprogramming of Machinarium, it is not compatible with save files from the original version of the game — in fact, it will delete them all. Amanita gave advance notice of this issue, and noted that the Definitive Version includes a feature that’s meant to soften the blow: Lost Save, which lets players jump straight into six different points throughout the adventure so they won’t have to replay the entire game.

Machinarium’s Definitive Version is now available as a free update on Steam, but only on Windows PC. The game’s Mac version is sold on Steam as well; the update is coming soon on that platform. Machinarium is also available on a variety of other platforms and digital storefronts, and Amanita said it “will soon start updating the game on other outlets.” The Linux version, however, is “gonna take us a bit more time.”

Amanita originally released Machinarium in October 2009 on Linux, Mac and Windows. The studio announced in July 2016 that lifetime sales had topped 4 million copies. The game is currently discounted by 75 percent to just $2.49 in this year’s Steam Summer Sale, which ends July 5.

23rd June 2017

Pokémon Go’s new Raid Battles are live, but only for the best of the best (update)

You can now check out Raid Battles, Pokémon Go’s cooperative multiplayer feature — but only if you’re good enough. Players level 35 and up can start taking down super-strong Pokémon together, with lower-leveled trainers getting their chance in the coming days.

Niantic announced on the game’s Twitter account yesterday that level 35 trainers can check out Raid Battles “at select gyms around the world,” which prompted a mix of responses. While most are excited for some co-op play, level 35 is high up there — the game’s level cap is 40. (We at Polygon aren’t able to check out Raid Battles quite yet, for instance, because we’re just a lowly level 18.)

This is just a temporary restriction, so anxious Pokémon Go players should keep that in mind. Once Raid Battles are available to everyone worldwide, they won’t come with any level caps, and any mix of players can work together to take on some overpowered foes. It wouldn’t hurt for those of us trailing behind the level 35-plus set to start working on our Pokémon Go game in the meantime, though.

Update: Players level 31 and up can now try Raid Battles as of 12:30 p.m. ET today.

23rd June 2017

Tesla said to be in talks to create its own streaming music service

Tesla might be a music service operator soon, in addition to a maker of electric cars and solar energy products. That’s according to a new report from Recode, which says that Tesla has been talking to music labels to make this happen.

The planned offering could start with a free, Pandora-like streaming radio option, which would presumably be tied to Tesla vehicle ownership. This sounds like a bizarre road for Tesla to take, but founder and CEO Elon Musk hinted that the company was exploring music products at the most recent Tesla shareholder meeting in early June.

Among other things, Musk noted that at present it’s “very hard to find good playlists or good matching algorithms” for music you want to hear while driving, and that Tesla would make an announcement about how it could address that later this year.

Musk’s sly comments sounded like the teasing of someone who has a juicy secret and can’t help but reveal just a bit of what they know, so it’s likely this is another project inspired by personal experience, much like The Boring Company, which was born of Musk’s exasperation with LA traffic.

As to why Tesla feels the need to go it alone here, instead of just working closely with another partner, that remains to be seen – it’s also possible this could still end up taking the form of a partnership, depending on how label talks proceed.

And even though it seems weird, if Tesla is thinking ahead to a future in which cars operate autonomously for much of the time, services will be a key business for Tesla to have a hand in, especially those that make the most sense for use in-car during trips.

23rd June 2017

Algorithmia raises $10.5M Series A round led by Google’s new AI fund

About a month ago, word spread that Google had quietly launched a new fund for investing into AI companies. Now this fund has made its first (or at least its first public) investment. Led by Google’s VP of engineering for AI, Anna Patterson, this new fund is leading a $10.5 million Series A into Algorithmia, a marketplace and enterprise solution that allows developers to easily tap into its catalog of 3,500 algorithms, functions and machine-learning models.

Other participants in this round include new investor Work-Bench, as well as current investors Madrona Venture Group, Rakuten Ventures and Osage University Partners.

As Algorithmia founder and CEO Diego Oppenheimer told me, there was a lot of excitement for this funding round among VCs, mostly because the service enables other companies to easily make use of recent machine learning advances. “There are a lot of people coming to VCs and saying: We are AI for this — and we are AI for that,” he said. “But there aren’t that many that are enabling this. The back-end operations, the scaling. Everybody believes that toolset is necessary.”

The Seattle-based company currently has 45,000 developers on its platform, and the algorithms on the site stem from university researchers and individual developers from across the globe.

“We were impressed with Algorithmia’s engineering capabilities and community promise,” said Patterson, Google’s VP of engineering for AI and the head of the company’s new AI fund. “They’ve built a secure and scalable marketplace for AI models that allows developers to openly collaborate.”

As Oppenheimer also stressed, the company has recently started to bring on more enterprises and even some intelligence agencies thanks to its ability to run their workloads in private, secured clouds. That’s a growing business for Algorithmia, and surely one of the reasons there was a lot of VC interest in this round.

As Oppenheimer noted, Algorithmia’s so-called CODEX platform was built to be portable, and the service currently runs on AWS, Azure and Google Cloud Platform. The team also supports private deployments on OpenStack, which its customers in the financial and telecom businesses were requesting. Given that the service essentially runs its users’ arbitrary code on its servers, the team has long focused on security, and that’s something that’s paying off now that it is talking to financial institutions and government agencies.

Given that Google has now invested in the company, I couldn’t help but ask Oppenheimer about potential acquisitions. “We have no intention of that yet. We are 100 percent committed to building out this business,” he said. He did add, though, that the company has received acquisition offers and that he could’ve “gone the acquisition round.” He stressed that Google wasn’t one of the companies that approached him about buying the company.

Algorithmia currently only has 13 people on staff. Most of these are engineers. With the new funding, the team will likely expand to 23 or so by the end of the year. Because of its success in the enterprise, the company also plans to open an office for its sales team and solutions engineers in New York City.

23rd June 2017

Amazon dreams of putting a giant drone beehive in your city

Patents don’t mean anything right up until the moment that they mean everything, so take it as read that none of this may ever happen. Amazon has, however, registered a patent for a concept that it’s calling a “Multi-Level Fulfillment Center for Unmanned Aerial Vehicles.” Which is a fancy way of saying that it wants to build enormous cylindrical warehouses at the heart of towns and cities. Rather than delivery folks driving parcels to your home, the building will be jam-packed with drones, which will fly in and out of the location’s many windows.

As much of an eyesore as it would be, Amazon’s idea does solve a few fairly obvious problems with being a logistics company in a city. After all, warehouse space requires plenty of land, which is at a premium in a dense urban environment. A vertical building would eliminate some of that issue, with robots and human pickers roaming the floors selecting the right Blu-ray for the drone to collect. Hell, imagine the dystopias, a few generations later, as post-apocalyptic humans worship those shining towers that provide food and clothing via their armies of flying robots.

23rd June 2017

The Air Force and IBM are building an AI supercomputer

Supercomputers today are capable of performing incredible feats, from accurately predicting the weather to uncovering insights into climate change, but they still by and large rely on brute processor power to accomplish their tasks. That’s where this new partnership between the US Air Force and IBM comes in. They’re teaming up to build the world’s first supercomputer that behaves like a natural brain.

IBM and the USAF announced on Friday that the machine will run on an array of 64 TrueNorth Neurosynaptic chips. The TrueNorth chips are wired together like, and operate in a similar fashion to, the synapses within a biological brain. Each core is part of a distributed network and operates in parallel with the others on an event-driven basis. That is, these chips don’t require a clock, as conventional CPUs do, to function.

What’s more, because of the distributed nature of the system, even if one core fails, the rest of the array will continue to work. This 64-chip array will contain the processing equivalent of 64 million neurons and 16 billion synapses, yet it absolutely sips energy, drawing just 10 watts of electricity.
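
The clockless, event-driven behavior described above can be sketched with a toy simulation: rather than stepping every core on a global clock, the program only does work when a spike event arrives, decaying each neuron’s state lazily. This is a simplified illustration of the general neuromorphic idea, not IBM’s actual TrueNorth programming model; all names and parameters here are invented for the example.

```python
import heapq
import math

class Neuron:
    """Toy leaky integrate-and-fire neuron, updated only when an event arrives."""
    def __init__(self, threshold=1.0, leak=0.5):
        self.threshold = threshold
        self.leak = leak            # exponential decay rate per time unit
        self.potential = 0.0
        self.last_update = 0.0

    def receive(self, t, weight):
        # Lazily decay the membrane potential for the elapsed interval,
        # then integrate the incoming spike. Returns True if the neuron fires.
        self.potential *= math.exp(-self.leak * (t - self.last_update))
        self.last_update = t
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after a spike
            return True
        return False

def run(neurons, synapses, initial_spikes, horizon=10.0):
    """Event-driven loop: a heap of (time, neuron id, weight) spike events.
    `synapses` maps a neuron id to a list of (target id, weight, delay)."""
    events = list(initial_spikes)
    heapq.heapify(events)
    fired = []
    while events:
        t, nid, w = heapq.heappop(events)
        if t > horizon:
            break
        if neurons[nid].receive(t, w):
            fired.append((t, nid))
            for target, weight, delay in synapses.get(nid, []):
                heapq.heappush(events, (t + delay, target, weight))
    return fired

# Two neurons: two closely spaced inputs make neuron 0 fire, which in turn
# drives neuron 1 over threshold. No global clock ticks are ever simulated.
neurons = {0: Neuron(), 1: Neuron()}
synapses = {0: [(1, 1.2, 0.1)]}
print(run(neurons, synapses, [(0.0, 0, 0.6), (0.1, 0, 0.6)]))
```

Because idle neurons cost nothing between events, work scales with spike traffic rather than with array size, which is the property that lets a 64-million-neuron array draw so little power.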

Like other neural networks, this system will be put to use in pattern recognition and sensory processing roles. The Air Force wants to combine the TrueNorth’s ability to convert multiple data feeds — whether it’s audio, video or text — into machine readable symbols with a conventional supercomputer’s ability to crunch data.

This isn’t the first time that IBM’s neural chip system has been integrated into cutting-edge technology. Last August, Samsung installed the chips in its Dynamic Vision Sensors, enabling cameras to capture images at up to 2,000 fps while burning through just 300 milliwatts of power.

23rd June 2017

You’ll never play ‘Super Mario’ like this

You have a lot of Super Mario games to choose from, but you’ll probably never be able to play one of the most fun versions out there. That’s because it was created as an unofficial augmented reality game by developer Abhishek Singh for the Microsoft HoloLens. It’s a first-person AR game, to be exact, so you’ll literally have to walk and jump around to avoid virtual pipes, step on Goombas and chase mushrooms. Singh told CNET that the thought of recreating a whole Super Mario Bros. level struck him while learning the basics of HoloLens development, because why not?

Singh used Unity 3D to create the level and recorded the video below entirely through HoloLens without post-production. He said the hardest part of the process was tweaking the game to work in a large outdoor environment, since HoloLens wasn’t exactly designed for physically big games like that. We say all that effort’s worth it, especially if you can find a Mario (or Luigi) costume to complete the experience.

Obviously, Singh can’t release the game due to copyright reasons, though CNET says he’s considering giving the code to other gamemakers. As for the rest of us? Well, we at least have Super Mario Odyssey to look forward to.

23rd June 2017

Straight Outta Tokyo: The battle to make ‘Project Rap Rabbit’

“Now let me welcome everybody to the wild, wild West.
A state that’s untouchable like Eliot Ness.”

It’s rare for a video-game developer to rap during an interview. It’s rarer still for him to recite a Tupac track with perfect pitch and cadence. But that’s Keiichi Yano, the Tokyo-based game designer behind cult classics Gitaroo Man and Osu! Tatakae! Ouendan, better known as Elite Beat Agents in the West. He loves music and will happily talk for hours about jazz, electronica or the intricacies of mumble rap.

His latest game, Project Rap Rabbit, fell woefully short of its Kickstarter goal this week. I met Yano a few days prior, during the chaos of E3, when it already seemed inevitable the campaign would fail. We talked about the title, its development and how he might proceed without public funding. To my surprise, Yano was unfazed by the Kickstarter’s fate and hinted that there might be another way to bring the game to market. “I can’t comment on anything we’re doing right now or anybody that we’re talking to. But yeah, I hope we can get this out one way or another.”

Yano’s latest project is a wildly ambitious music game about Toto-Maru, a rabbit with the ability to change the world through rhythm and rhyme. Like Gitaroo Man, it’s a title that treats music as both a gameplay and storytelling device. Whereas Rock Band is essentially a virtual jukebox, allowing you to perform your favorite songs, Project Rap Rabbit is an original interactive musical. The tracks are enjoyable to play through, requiring precision strategy and timing, but they also reinforce and accentuate the narrative, underlining key conversations and conflicts.

The game is set in an alternate version of feudal Japan, where anthropomorphic animals roam the streets. When the planet is struck by a mysterious calamity, many citizens are forced to find new homes. While some embrace the movement of people and the diversity it brings, many resent it, creating a social divide not too dissimilar to our own reality. Toto-Maru’s quest is simple: to bring the people back together, restoring peace and prosperity in the process.

Yano is working on the game with Masaya Matsuura, the creator of the colorful and eccentric PlayStation title PaRappa the Rapper. The pair met 18 years ago, before Yano embarked on his own rhythm game for the PlayStation 2. “I first approached him when I started developing Gitaroo Man because I had to meet the man who was the source of all this, right?” Yano recalls with a chuckle. They’ve kept in touch since and, on several occasions, considered collaborating. But it never panned out, due to a mixture of factors — financial, technological and cultural.

So, for years, they would meet once or twice a week in Tokyo and discuss what they were working on. “Every time I see him, it always feels fresh to me,” Yano explains. “Because he’s a very progressive guy. He might be thinking one thing one year, and then he’ll be completely someplace else in another. So it’s always just fun to catch up with him and get updates on what he’s thinking.”

The project started with PQube, a small indie game publisher based in Letchworth, a leafy town 40 miles north of London. The company asked Yano whether he would like to make a new game, and he, in turn, reached out to Matsuura. The Gitaroo Man developer knew he wanted to “augment” the project with “some other force” but hadn’t considered Matsuura until an early brainstorming session with PQube. When the idea was brought up, he quickly messaged his old friend on Facebook. “‘Hey, there’s a chance [that we can do a new music game], what do you want to do?'” Matsuura was intrigued and the pair set up a meeting face-to-face.

“It didn’t feel like a reunion at all,” Yano recalls, “it was more about, ‘Let’s explore new ideas and new ways of thinking about things.'” Almost immediately, the two developers found common ground. They were interested in similar ideas, both narratively and from a gameplay perspective, which quickly led to an agreement. Matsuura joined iNiS, the video game studio Yano co-founded in 1997 (it stands for “infinite Noise of the inner Soul”) to help lead the project. With a 10-person team, the pair began formalizing what the story and mechanics would be.

For years, Yano and Matsuura have dreamed of a music game that allows the player to be more expressive. In the past, when they discussed potential collaborations, it was often about music manipulation, tracks that would change tempo depending on your performance or branch into different styles at the press of a button. Some of these concepts have since been explored, but at the time, they were wholly original. Both designers craved an experience in which the player could feel she was creating something truly original and personal in real-time.

“How can music be more interactive and play a more defining role rather than be just, I dunno, the base layer that everything goes on top of?” Yano said. “Because that’s what modern music games do today, right? It’s all essentially supported by the music itself. And the music itself doesn’t change, because they’re usually songs that you and I both know. So you’re just building gameplay mechanics on top of that.” Instead, Yano wanted the music to be driven by the gameplay.

“[Matsuura] and I were both musicians and instrumentalists, so we really understand and love the interactiveness, if that’s a word, of musical instruments,” Yano said. “Because that’s the coolest thing, right? It’s cool to press something and then suddenly the sound is just … awesome. You’re immersed in that, and there’s a feeling against that. So that’s what we’re always trying to do.”

“We really understand and love the interactiveness, if that’s a word, of musical instruments.”

That’s easier said than done. Mainstream video games need to be approachable and easy to understand. That restricts the number of options you can give the player at any one time. Push too far toward realism, for instance, and you’ll end up with a piece of professional audio software. Go too far the other way and you’ll make a thoroughly enjoyable but creatively limiting title like Guitar Hero. “On some level, you need to virtualize the experience so that it’s still entertainment,” Yano adds. “But at the same time, let the player feel like they’re making important choices.”

To that end, Yano and Matsuura developed a rap-battle simulator. Project Rap Rabbit is split into two phases: call and response, which mimics how lyricists spar in real life. As your opponent tries to embarrass you, the game will highlight “focus words” that make up the bulk of his argument. A mood wheel will then show up in the corner of the screen, giving you time to choose a counter-rapping style. Coerce, joke, boast or laugh — it’s up to you. During the response phase, you’ll be asked to press buttons rhythmically with the beat and hit specific triggers when the focus words appear in your own lyrics.

Enemies will be susceptible to different rapping styles. As the difficulty ramps up, these weaknesses will change mid-battle. You’ll need to read the situation and, at certain junction points, change your strategy in order to deal extra damage. Toto-Maru will also have a skill tree, similar to conventional role-playing games, so you can define his strengths and shortcomings as a rapper. It all adds to the game’s depth, which far outstrips Gitaroo Man and PaRappa the Rapper. Expert players, for instance, will learn to combo by quickly alternating between rap styles, or by using the turntable-inspired sample technique that requires double-, triple- and quadruple-tapping specific focus words.
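
On paper, that resolution logic resembles the type-effectiveness systems of conventional RPGs. Here is a minimal sketch of how such a call-and-response round might be modeled; every name, number and formula below is our own guess for illustration, not code or design data from Project Rap Rabbit.

```python
import random

STYLES = ["coerce", "joke", "boast", "laugh"]   # the four mood-wheel options

class Enemy:
    def __init__(self, health=100.0):
        self.health = health
        self.weakness = random.choice(STYLES)

    def shift_weakness(self):
        # At certain junction points the weakness changes mid-battle,
        # forcing the player to re-read the situation.
        self.weakness = random.choice([s for s in STYLES if s != self.weakness])

def resolve_response(enemy, style, timing_accuracy):
    """Score one response phase. `timing_accuracy` in [0, 1] stands in for
    how well the player hit the beat and the highlighted focus words."""
    base = 10.0 * timing_accuracy
    multiplier = 2.0 if style == enemy.weakness else 1.0  # exploit the weakness
    damage = base * multiplier
    enemy.health -= damage
    return damage

random.seed(7)
enemy = Enemy()
# Perfect style choice, slightly off-beat timing: 10.0 * 0.9 * 2.0 = 18.0
print(resolve_response(enemy, enemy.weakness, timing_accuracy=0.9))
```

The combo and skill-tree systems described above would presumably layer further multipliers on top of this same resolution step.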

If all goes to plan, Project Rap Rabbit will have multiplayer too. Yano wants the game to be technical and competitive — the musical equivalent of Street Fighter or Tekken. So, unlike Rock Band, which offers a simple score chase, Project Rap Rabbit will put two players head-to-head. “So it’s all about, ‘If you do this, I’m going to counter with this, and then if you’re going to counter with this, I’m going to counter with something else.'” That’s why the call and response phases are so crucial. Like a high-speed game of chess, top players will need to plan multiple moves ahead.

Early in the project, Yano and Matsuura talked about Japan and its “hidden” history. Certain periods, Yano explains, were largely undocumented and raise questions about Japanese culture and the influence of outside forces. You can find paintings and patterns, he says, that feel out of place for their particular time period or reference styles that first blossomed in other countries.

In particular, the pair were interested in the story of Yasuke, a black samurai from Africa. While his origins are shrouded in mystery, most believe he was brought over as a slave in 1579. He quickly became a local sensation, however, which earned him an audience with the hegemon and warlord Oda Nobunaga. Yasuke impressed and was eventually hired as Nobunaga’s retainer and weapon bearer. His life as a samurai was cut short when Nobunaga was attacked and forced to commit seppuku by his general, Akechi Mitsuhide, in 1582, following a coup.

“Again, it’s the discovery of this hidden history that not a lot of people know,” Yano says.

Project Rap Rabbit is a fantastical, offbeat attempt at filling in these gaps. Toto-Maru started as a human; a rapping samurai with an eye-catching kimono. But as Yano and Matsuura developed the story, which centers around diversity and inclusion, they realized the game needed a friendlier, more-approachable hero. By chance, the team’s lead artist had started drawing a rabbit. It immediately caught Yano’s attention. “I said, ‘That’s interesting! That kind of works!'” The artist developed the idea overnight, and it eventually became the key art for the website, Kickstarter and teaser trailer. “In one fell swoop, we had the world and the protagonist and — I would not say the father or teacher figure, but the authority figure — all wrapped up in this one piece of art,” Yano says.

The animals weren’t enough, though. The team wanted to imbue its version of feudal Japan with some modern, fresh ideas. It turned to anime like Spirited Away, the hit fantasy film by Studio Ghibli, and Samurai Champloo, which combined Edo-era Japan with kinetic hip-hop music and culture. Yano started drawing drones in the sky and liked how they looked against the game’s existing artwork. It reminded him of the classic ukiyo-e art style, which contemporary artists have started adopting to portray current and futuristic scenes.

Rap was then a natural genre to explore. “A lot of people think it’s because we’re creating some PaRappa spiritual sequel. That was, actually, really more secondary. It was more about the fact that we had a message we wanted to convey, and rap just seemed like a really good vehicle to do that.”

There are many types of rap music, all of which will be represented in the game. In general, however, it won’t sound as “homey” as PaRappa the Rapper, in order to reflect the 20 years that have passed since Matsuura’s game came out. “Rap has evolved; it’s a really big part of our mainstream culture now,” Yano says. “It’s in all forms of pop music, and it’s obviously an expressive instrument in and of itself. Not to mention there’s a whole culture surrounding rap battles and street rap. So with us doing a rap game, and all that history behind us, it’s going to be a different sound to PaRappa.”

All of these ambitions have been overshadowed by the team’s Kickstarter. The way Yano describes it, crowdfunding was the only option. PQube was involved in the project but didn’t have the resources to fund all of its development. They needed cash and the public’s support to continue. But the campaign was criticized for its lack of gameplay footage and some strange stretch goals, one of which required $4.96 million to make a version for the Switch. The stretch goals were later reworked, but by that time, the damage had been done. People were excited about the project, but it didn’t have the momentum to reach its goal.

“It’s clear that there were things we could have done better,” Yano says. “And that was a good learning experience.”

He admits that “with 20-20 hindsight,” the team probably showed its game a little too early. “But I actually don’t regret making that decision, because it allowed us to engage with a community at a time when we weren’t 100 percent sure there would be a community.”

It had been so long between Gitaroo Man and Project Rap Rabbit, after all. They knew there was an audience for music games, but their particular style, which blends story and gameplay, felt like a gamble. “Engaging with your community very early is always a scary thing because you’re still working on stuff. But I actually had a lot of fun with it. Man, things like, all of the fan art that came through. It was just good to hear a lot of feedback early on around what people expected from us, and obviously there was some amount of things that we channeled through that exchange, and that we reflected overall [in the game] as well.”

Still, the campaign finished with $205,000, nowhere near its $1.08 million Kickstarter goal. It was not the outcome the team had hoped for.

“Crowdfunding in the modern day, it’s a very tough place. It requires certain things to happen even before you start the campaign. And you know, we would have probably been better off doing some things that we just weren’t able to, for one reason or another.” Yano seems upbeat, however. He talks about the game with a passion and conviction that suggests its release is an absolute certainty. The collaboration with Matsuura, the ideas underpinning the music and story. I have a feeling there’s something he’s not telling me.

“We thank everybody that supported us, regardless of the final outcome on Kickstarter. We’re very thankful to everybody who supported it. I loved the fan art and everything, and yeah, we’re going to try to get this out one way or another. So please stay tuned for updates. We’ll have more as we have more!”

Make of that what you will.

23rd June 2017

Nintendo’s 3DS isn’t dead, but it is trapped in the Switch’s shadow

Earlier this year, Nintendo announced a brand-new console, a hybrid device that serves as both a portable entertainment machine and a game system for the living room. At a glance, it looked great — but some criticized the Nintendo Switch for having “nothing to play” beyond the new Legend of Zelda game.

Nintendo’s E3 show served as a strong answer to those critics: Between Super Mario Odyssey, the promise of a new Pokemon game, new Xenoblade, Yoshi and Kirby titles and a Switch port of Rocket League, Nintendo gave buyers every reason to pick up its latest portable console. At the same time, it gave fans almost no reason to pick up its other handheld device. If you don’t already own a 3DS, you’re probably never going to buy one now.

This wasn’t the plan — at least not publicly. After Nintendo revealed that the Switch was a hybrid portable, the obvious question bubbled to the surface: Is the new console going to replace the 3DS? The company said “no,” emphatically, and pushed out a short list of new releases that will keep 3DS owners happy in the short term.

Indeed, the 3DS has since seen the release of Fire Emblem Echoes: Shadows of Valentia and Poochy & Yoshi’s Woolly World. Revamped Pokemon games and a Pikmin spin-off are on the horizon too — but the company’s E3 offerings were almost completely devoid of new announcements for the stereoscopic handheld. In total, the company revealed just three new 3DS games for the show: a remake of Mario & Luigi: Superstar Saga, a reimagining of the second Metroid game and a fast-paced sushi puzzle game.

Taken on their own, those all sound great. Metroid is a franchise that’s been dormant for far too long, and Mario & Luigi: Superstar Saga + Bowser’s Minions looks like a solid update to the GBA classic — but in the context of the company’s massive outpouring of Switch support, it all feels a little discouraging. Nintendo didn’t fib when it promised to support the 3DS for the foreseeable future, but the handheld’s upcoming releases are launching without fanfare and seem to rely heavily on games that were previously announced.

Recent and upcoming releases like Ever Oasis and Hey! Pikmin weren’t even mentioned during the show, and details about Pokemon Ultra Sun and Moon were scarce. The 3DS version of Fire Emblem Warriors wasn’t shown off either.

Sure, Nintendo hid a Layton’s Mystery Journey: Katrielle and the Millionaires’ Conspiracy pop-up cafe in downtown LA, but that game will launch on smartphones before hitting Nintendo’s console. There are obviously a fair number of things to play down the line, but almost none of them were part of Nintendo’s show event. If it weren’t for brief 3DS showcases sprinkled throughout Nintendo’s all-day Treehouse livestreams, the stereoscopic console would have been all but absent from E3.

This isn’t a bad thing, necessarily, but it’s all context that colors Nintendo’s previous statements about the future of the 3DS. It’s true, the Switch isn’t replacing the 3DS — at least not yet — but the lack of games showcased at E3 shows that the family of handhelds really isn’t Nintendo’s priority. I believe Reggie Fils-Aime when he says the company will “support” the 3DS through 2018, but that probably just means churning out the games already in production, few if any of which will be first-party titles.

If you already own a 3DS or 2DS device, this is ultimately good news. It means that you’ll have at least another 18 months of play out of the device — but if you’ve been thinking about picking one up, it’s a reason to step back. With few exceptions, the 3DS library as it is today is all new buyers can expect from the console. If you want to experience the best of Nintendo’s franchises going forward, you’ll want to look at Nintendo’s newer portable. That’s also a good thing: If E3 showed us anything, it’s that the Nintendo Switch is going to have a great first year.

23rd June 2017

Amputees control avatar by imagining moving their missing limbs

People who have had amputations can control a virtual avatar using their imagination alone, thanks to a system that uses a brain scanner.

Brain-computer interfaces, which translate neuron activity into computer signals, have been advancing rapidly, raising hopes that such technology can help people overcome disabilities such as paralysis or lost limbs. But it has been unclear how well this might work for people who have had limbs removed some time ago, as the brain areas that previously controlled these may become less active or repurposed for other uses over time.

Ori Cohen at IDC Herzliya, in Israel, and colleagues have developed a system that uses an fMRI brain scanner to read the brain signals associated with imagining a movement. To see if it can work a while after someone has had a limb removed, they recruited three volunteers who had had an arm removed between 18 months and two years earlier, and four people who have not had an amputation.

While lying in the fMRI scanner, the volunteers were shown an avatar on a screen with a path ahead of it, and instructed to move the avatar along this path by imagining moving their feet to move forward, or their hands to turn left or right. The people who had had arm amputations were able to do this just as well with their missing hand as with their intact hand. Their overall performance on the task was almost as good as that of the people who had not had an amputation.

“Although the amputees’ performance is a little bit behind the control group, the big picture shows they are almost the same level, and still using the missing arm in their brain,” says Cohen, who presented the research at the IEEE EMBS Conference on Neural Engineering last month.

Because the system requires a person to be inside a brain scanner, it would not be possible to use it outside a lab. But Cohen thinks that a new technology called functional near infrared spectroscopy will make it possible to read the same brain signals with portable devices. This may lead to new ways for people who have had limbs removed to control prosthetic devices.

But Dario Farina, of Imperial College London, doesn’t think such a system is likely to be very useful for amputees. “There are alternative techniques that are far superior for prosthetic control,” he says.

The fMRI interface only distinguished four commands: forward, stop, left and right. Prosthetic controllers that work by detecting muscle signals at the stump of the severed limb can distinguish more commands, respond more quickly, and allow the user to control the force or speed.
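
That small command vocabulary is part of why the fMRI approach works at all: decoding reduces to classifying an activity pattern into one of four classes. A toy nearest-centroid decoder over synthetic activation vectors shows the shape of the problem; this is purely illustrative and not the analysis pipeline used in the study.

```python
import numpy as np

COMMANDS = ["forward", "stop", "left", "right"]

def fit_centroids(patterns, labels):
    """Average the training activity patterns for each imagined movement."""
    return {cmd: np.mean([p for p, l in zip(patterns, labels) if l == cmd], axis=0)
            for cmd in COMMANDS}

def decode(pattern, centroids):
    """Return the command whose centroid is nearest to the new pattern."""
    return min(COMMANDS, key=lambda c: np.linalg.norm(pattern - centroids[c]))

# Synthetic demo: each command gets a distinct mean pattern plus noise,
# standing in for voxel activations in motor-imagery regions.
rng = np.random.default_rng(42)
means = {cmd: rng.normal(size=20) for cmd in COMMANDS}
patterns, labels = [], []
for cmd in COMMANDS:
    for _ in range(30):
        patterns.append(means[cmd] + 0.3 * rng.normal(size=20))
        labels.append(cmd)
centroids = fit_centroids(patterns, labels)

print(decode(means["left"] + 0.3 * rng.normal(size=20), centroids))
```

Muscle-signal controllers beat this kind of scheme on speed and command count, as Farina notes, but the same template-matching idea would carry over to portable near-infrared measurements.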

Farina thinks Cohen’s system could be more useful for locked-in patients, who have no means of communicating except via brain signals. “For other types of patients, this is a good performance, which is promising,” he says.


23rd June 2017

Uranus’s crooked, messy magnetic field might open and shut daily

Uranus has the weirdest magnetic field in our solar system, and it just got weirder. A new model suggests that the edge of its magnetic field bubble could be slamming open and shut every day.

Most of the planets in our solar system rotate around roughly similar axes, spinning in the same plane as their orbit. Their magnetic fields are aligned with these axes, with field lines emerging from the centres of the planets near their north and south poles and wrapping them in magnetospheres – protective bubbles of magnetism.

Uranus is not like most of the planets. It rotates on its side, tilted almost 98 degrees from the plane of its orbit around the sun. The axis of its magnetic field is tilted too, at a 59-degree angle from the rotational axis. The magnetic field is also off-centre, with the field lines emerging about a third of the way toward the south pole.

All of this makes Uranus’s magnetosphere a total mess. “As it is tumbling around, the magnetosphere’s orientation is changing in all sorts of directions,” says Carol Paty at the Georgia Institute of Technology in Atlanta.

To study the effects of this tumbling, Paty and her student Xin Cao created a model of the magnetosphere and its interactions with the solar wind, a stream of charged particles blown out by the sun.

The magnetosphere acts as a barrier to the solar wind: when the two are moving in the same direction, the solar wind slides off it like water off a duck’s back. But just as when water hits a duck’s feathers from the tail end, the duck gets wet, so when the solar wind blows toward Uranus at the right angle, the planet’s magnetic field lines up with the solar wind’s and lets some particles flow through.

This process, called magnetic reconnection, occurs occasionally near Earth’s poles, where the influx of particles from the solar wind can lead to intensified auroras. On Uranus, Paty and Cao found that it should happen every single day (roughly 17 Earth hours), switching the magnetosphere’s protection on and off. This could lead to an aurora there as well.
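
The daily rhythm falls out of geometry alone: as the planet spins, the 59-degree-tilted dipole axis sweeps a cone around the rotation axis, so its angle to the roughly fixed solar wind direction oscillates once per 17.24-hour rotation. The sketch below computes that angle under deliberately simplified assumptions (a rigid dipole, a solstice-like orientation with the spin axis near the sun line, and the field’s off-center offset ignored); it is our own back-of-the-envelope illustration, not Paty and Cao’s simulation.

```python
import math

def dipole_sun_angle(hours):
    """Angle (degrees) between the dipole axis and the sunward x-axis,
    with z along the orbital plane's normal. Simplified rigid geometry."""
    spin_tilt = math.radians(98)   # rotation axis tilt from the orbital normal
    dip_tilt = math.radians(59)    # dipole tilt from the rotation axis
    period = 17.24                 # rotation period, in Earth hours

    # Spin axis, plus two unit vectors perpendicular to it.
    s = (math.sin(spin_tilt), 0.0, math.cos(spin_tilt))
    u = (math.cos(spin_tilt), 0.0, -math.sin(spin_tilt))
    v = (0.0, 1.0, 0.0)

    phase = 2.0 * math.pi * hours / period
    # Dipole axis: a cone of half-angle dip_tilt swept around the spin axis.
    d = tuple(math.cos(dip_tilt) * s[i]
              + math.sin(dip_tilt) * (math.cos(phase) * u[i] + math.sin(phase) * v[i])
              for i in range(3))
    return math.degrees(math.acos(d[0]))   # d . x_hat is just d[0]

angles = [dipole_sun_angle(h) for h in range(18)]
print(round(min(angles)), round(max(angles)))
```

Even in this crude picture, the field’s orientation relative to the wind swings back and forth every rotation; the real model adds the off-center field and the solar wind’s own magnetic field, which is what actually permits or forbids reconnection.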

But it’s hard to know what exactly is going on at Uranus, since the only close-up observations we have are from 1986, when the Voyager 2 spacecraft whizzed past over the course of five days.

“We caught a glimpse of a mystery when we flew by,” says Paty. “We went inside Uranus’s magnetic field and suddenly it didn’t look like Earth or Jupiter or Saturn at all.” That brief snapshot isn’t much, but Paty’s model matches it perfectly.

“It’s great that it matches as well as it does with the one fly-by that we have of Uranus,” says George Hospodarsky at the University of Iowa. “But the real test would be sending an orbiter there and getting lots of data in different conditions and seeing if the model still matches.” NASA has plans in the works to send a new probe to Uranus in 2034, but no mission has been approved yet.

Paty hopes that as our tally of exoplanets grows, a greater understanding of Uranus will help us make inferences about those distant worlds, many of which are ice giants like Uranus and Neptune. Figuring out how their magnetic fields protect them (or not) from the stellar wind could be key to determining what their surfaces are like.

It may even lead to insights closer to home, by teaching us more about Earth’s relatively simple magnetic field.

“Looking at how Uranus’s complicated, strange magnetosphere works helps us understand how all the other systems work,” says Hospodarsky. “It’s sort of like doing an experiment one way and then turning it upside down and starting again. If it still works, your theories are good.”

Journal reference: Journal of Geophysical Research: Space Physics, DOI: 10.1002/2017JA024063

23rd June 2017

Pokémon Go’s new Raid Battles are live, but only for the best of the best

You can now check out Raid Battles, Pokémon Go’s cooperative multiplayer feature — but only if you’re good enough. Players level 35 and up can start taking down super-strong Pokémon together, with lower-leveled trainers getting their chance in the coming days.

Niantic announced that level 35 trainers can check out Raid Battles “at select gyms around the world” on the game’s Twitter account yesterday, which prompted a mix of responses. While most are excited for some co-op play, level 35 is high up there — the game’s level cap is 40. (We at Polygon aren’t able to check out Raid Battles quite yet, for instance, because we’re just a lowly level 18.)

This is just a temporary restriction, so anxious Pokémon Go players should keep that in mind. Once Raid Battles are available to everyone worldwide, they won’t come with any level caps, and any mix of players can work together to take on some overpowered foes. It wouldn’t hurt for those of us trailing behind the level 35-plus set to start working on our Pokémon Go game in the meantime, though.

23rd June 2017

Google’s multitasking neural net can juggle eight things at once

Deep-learning systems tend to be one-trick wonders: they’re great at the task they’ve been trained to do, but pretty awful at everything else. Now a new neural network from Google suggests that AI can be taught to multitask after all.

Most deep-learning systems are built to solve specific problems, such as recognising animals in photos from the Serengeti or translating between languages. But if you take, for instance, an image-recognition algorithm and then retrain it to do a completely different task, such as recognising speech, it usually becomes worse at its original job.

Humans don’t have that issue. We naturally use our knowledge of one problem to solve new tasks and don’t usually forget how to use a skill when we start learning another. Google’s neural network takes a tiny step in this direction, by simultaneously learning to solve a range of different problems without specialising in any one area.

The neural network from Google Brain – one of the search giant’s deep-learning teams – learned how to perform eight tasks, including image and speech recognition, translation and sentence analysis. The system, called MultiModel, is made up of a central neural network surrounded by subnetworks that specialise in specific tasks relating to audio, images or text.
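
As a loose illustration of that topology — not Google’s released code — a toy version of the shape described here might look like the following, where every layer size, projection and task name is invented for the example:

```python
# Minimal sketch (not Google's MultiModel) of the architecture shape
# described above: small task-specific "modality" encoders map each
# input type into a common vector space, one shared core transforms
# it, and task-specific heads decode the result. All sizes invented.
import numpy as np

rng = np.random.default_rng(0)
DIM = 32  # width of the shared representation

def make_layer(n_in, n_out):
    W = rng.normal(0, 0.1, (n_in, n_out))
    return lambda x: np.maximum(x @ W, 0.0)   # ReLU layer

# Task-specific encoders map raw features of different sizes into DIM.
encoders = {"image": make_layer(64, DIM),
            "audio": make_layer(128, DIM),
            "text":  make_layer(48, DIM)}
# A single shared core is reused for every task.
shared_core = make_layer(DIM, DIM)
# Task-specific heads map the shared representation to each output size.
heads = {"classify":  make_layer(DIM, 10),
         "translate": make_layer(DIM, 48)}

def multimodel(x, modality, task):
    return heads[task](shared_core(encoders[modality](x)))

out = multimodel(rng.normal(size=64), "image", "classify")
print(out.shape)   # (10,)
```

The point of the design is that the shared core sees every task’s data, so gradients from one problem can shape representations used by all the others — which is how training on images could plausibly help sentence parsing.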

Although MultiModel did not break any records for the tasks it attempted, its performance was consistently high across the board. With an accuracy score of 88.6 per cent, its image-recognition abilities were only around 9 per cent worse than the best specialised algorithms – matching the abilities of the best algorithms in use five years ago.

The system also showed other benefits. Deep-learning systems usually need to be trained on large amounts of data to perform a task well. But MultiModel seems to have come up with a neat way of sidestepping that, by learning from data relating to a completely different task.

The network’s ability to parse the grammar of sentences, for example, improved when it was trained on a database of images, even though that database had nothing to do with sentence-parsing.

Sebastian Ruder at the Insight Centre for Data Analytics in Dublin, Ireland, is impressed with Google’s approach. If a neural network can use its knowledge of one task to help it solve a completely different problem, it could get better at tasks that are hard to learn because of a lack of useful data. “It takes us closer on the way to artificial general intelligence,” he says.

Google has released the MultiModel code as part of its TensorFlow open-source project, giving other engineers a chance to experiment with the neural network and put it to the test. The network’s complexity, however, might make it difficult for researchers to work out the reason behind its multitasking skills, says Ruder.

Journal reference: arxiv.org/abs/1706.05137

23rd June 2017

Some Uber employees are reportedly petitioning for Travis Kalanick to stay

There’s a petition circulating among Uber employees asking the board of directors to let Travis Kalanick return to the company, Recode reports. The email going around talks about how Kalanick is “critical” to the company’s future success. It ultimately asks employees to show their support for Kalanick and push for him to get reinstated in an operational role.

Update 10:45am PT: The letter, which was signed by over 1,100 employees (Uber employs about 14,000 people), has since been sent to the board, Axios reports.

“As the folks who’ve actually worked alongside Travis for years to help create Uber from nothing, we are extremely disappointed by the short-sightedness and pure self-interest demonstrated by those who are supposed to protect the long-term interests of our company,” the letter to the board reportedly states.

“Yes, Travis is flawed, as we all are. But his passion, vision, and dedication to Uber are simply unmatched,” the letter goes on to say. “We would not be here today without him, and believe he can evolve into the leader we need. He is critical to our future success.”

Earlier this week, Uber’s investors pressured Kalanick to step down from his role as chief executive officer. Kalanick, who complied with the request, had already agreed to take a leave of absence, but that was not enough for some of Uber’s shareholders.

After the news broke, Kalanick said in a statement to The New York Times that he loves Uber “more than anything in the world.” He went on to say that he “accepted the investors request to step aside so that Uber can go back to building rather than be distracted with another fight.”

In response to my request about the petition, an Uber spokesperson sent me what the executive team communicated to employees yesterday.

“As you’d expect, the emotions around Travis’ decision are intense. We understand that, and we want all of you to know that he did not make this decision lightly. Stepping back now was his way of putting Uber first, as he always has. Travis gave more to this company than anyone. He had a deep and meaningful impact on countless numbers of people at Uber and around the world, and for that, we will forever be grateful.”

23rd June 2017

As Uber’s value slips on the secondary market, Lyft’s is rising

It’s been happening for months. The value of Uber’s shares has been falling on the secondary market, hammered by a barrage of press attention paid to its real and perceived misdeeds.

That slip is widely seen as the reason Uber investors strong-armed CEO Travis Kalanick out of his role as CEO on Tuesday night. As numerous sources confirmed to us yesterday (and The Information first reported in late April), Uber is right now valued at roughly $50 billion by secondary shareholders — a far cry from the $68 billion that its primary investors have assigned it. Such a fall is especially notable given that last year, secondary investors were willing to pay full freight — even a premium — for any Uber shares they could lasso.

Meanwhile, Lyft’s stock is on the rise. Specifically, say our sources, the typical 20 percent discount assigned to shares by secondary purchasers has, in Lyft’s case, narrowed to between 9 and 13 percent, as buy-side interest grows and existing shareholders hang on for the ride. “We’ve definitely seen pricing in Lyft go up,” says one source who asked not to be named in an article about related trades.

“Part of that is the clouds around Uber have made Lyft relatively more attractive,” says this person. But that rise is also a function of Lyft’s recent round of fundraising, he says, noting that in April, Lyft closed on $600 million in fresh funding, at a $7.5 billion valuation.

It’s hard to know if these trend lines will continue, obviously. Much depends on how quickly Uber is able to fill out its executive ranks and with whom.

But sharks are circling, with prospective buyers trying to gauge fear in the market — and how much it buys them.

One source says she saw a bid indication — meaning an expression of interest — in purchasing Uber shares at a $40 billion valuation yesterday.

That was a first, says this source, and no deal was transacted at that price, as far as she knows. In fact, unlike with Lyft — which historically has been willing to approve employee transfers of secondary shares and only recently instituted more restrictions on those transactions — any data points regarding Uber should be “taken with a grain of salt,” observes Shriram Bhashyam, the co-founder and general counsel at EquityZen, a marketplace of pre-IPO shares.

First and foremost, because Uber has famously tight transfer restrictions and policies around secondary trading, any conclusions are “based on thin volume,” he says.

Not all trades in Uber are share transfers, he further notes. Because of those transfer restrictions, some Uber trades are transfers of interests in special purpose vehicles — pop-up funds, essentially — that hold shares, and those include a discount factor separate and apart from the governance and culture issues plaguing the company.

Either way, it seems logical that Uber’s valuation will drop further still before it rebounds, and not just on the secondary market. All things considered, a down round seems all but inevitable right now.

“[The secondary market] is not like a public market where you see trades go on and off,” says Santosh Rao, head of research at the investment bank Manhattan Venture Partners. “It’s an opaque market, and it’s too early to tell [where Uber goes from here].” Still, he says, “I think people will be cautious. I think people who wanted to buy are holding off.”

Given recent weeks in particular, they can’t help but ask themselves: How much is Uber worth now?

23rd June 2017

HEBI is trying to make building custom robots as easy as playing with LEGO

The X-Series Actuator doesn’t look like much. Actually, if I’m being honest, it kind of looks like a red metal Scotch tape dispenser with ribbed sides and a couple of Ethernet ports. The product is scattered all over HEBI’s one-room Pittsburgh office in various states of disarray. The palm-sized metal component is the startup’s primary product — its entire reason for existing, really. The actuator’s unassuming profile hides a lot of impressive technology that has helped make the three-year-old company a rising star in the city’s bustling robotics startup community.

Its capabilities come into clearer focus as you look around the room at a number of wildly diverse robots that use the little red actuator as a sort of connective tissue — a robotic knee or elbow joint. There’s a grasping arm, a milling machine and a few other half-concocted builds that look like robotic rejects from the Island of Misfit Toys.

Co-founder Dave Rollinson introduces us to Igor, a strange and skinny robot that balances on two wheels like a Segway. On top of its square frame are a pair of long arms that arc down in an L-shape, each with a circular paddle for hands. With very light controls, the robot can clasp and pick up objects. It’s not the most graceful robot we’ve seen on our three-day trip to the City of Bridges, but it’s a perfect example of how the company’s product can be used to quickly piece together a complex robotic prototype. It’s kind of like an Erector Set for grownups with computer science degrees.

“A lot of people think that it’s just motors and gears,” says Rollinson. “But there’s a lot more required to do it. There are a lot of sensors and a lot of embedded control to make the joint go where you want it to. The key thing that we build into all of our parts is the ability to control force.”
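
Rollinson’s point about force control can be sketched in the abstract. A hypothetical torque-limited impedance loop — with gains and limits invented for the example, nothing to do with HEBI’s actual firmware — shows why a force-aware joint behaves differently from a raw motor: it yields at an obstacle instead of pushing through it.

```python
# Toy sketch (not HEBI's control code) of torque-limited impedance
# control: commanded torque is proportional to position error, damped
# by velocity, and capped at a safe limit, so the joint stays
# compliant. All gains and limits here are made up.
def impedance_torque(target, position, velocity,
                     stiffness=5.0, damping=0.5, torque_limit=2.0):
    """Commanded joint torque (Nm) for a compliant move to 'target'."""
    torque = stiffness * (target - position) - damping * velocity
    return max(-torque_limit, min(torque_limit, torque))

# Far from the target, the command saturates at the safe limit...
print(impedance_torque(target=1.0, position=0.0, velocity=0.0))  # 2.0
# ...near the target, it tapers off smoothly instead of oscillating.
print(impedance_torque(target=0.1, position=0.0, velocity=0.0))  # 0.5
```

A position-only controller would drive to the setpoint regardless of what is in the way; capping commanded torque is one simple reason force-controlled joints are safer around people.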

The company, like most of Pittsburgh’s thriving robotics community, began life at nearby Carnegie Mellon. The actuator has its origins in CMU’s snake robot, a modular mechanical serpent that’s proven to be one of the school’s most enduring projects. In fact, we first looked at the ‘bot back in 2008, when it was still in its earliest stages.

Since then, it’s proven a diverse and robust project — though, even with the university’s aggressive approach to spinning off startups, it hasn’t been easy to monetize. The nascent company flirted with the idea of positioning it as a search and rescue robot, touting its ability to squeeze into pipes and other tight spaces. Ultimately, however, it was the snake’s parts that gave rise to HEBI.

“We were making these snakes that were made up of a bunch of different modules that were chained up together,” says Rollinson, who, along with the rest of the founding team was a member of the school’s Biorobotics lab. “We realized what we had was the building blocks of a custom system. We decided to make a company dedicated to making building these custom robots as easy as playing with LEGO.”

The modules are assembled and tested by HEBI’s eight-person staff. Demand is still manageable, but the company’s LEGO-like approach to robot building has made it a hit in Pittsburgh’s tight-knit robotics community. The sophisticated underlying technologies could eventually wind up in industrial robots, which would benefit from their ability to control force, making it safer for them to interact with factory workers. In the meantime, however, the X-Series is primarily finding success as a prototyping tool.

For now, the company’s reach is still pretty limited. Its online-only distribution model is primarily aimed at startups, universities and research facilities. As cool as it would be to build your own robot at home, you’re not going to be able to pick up the actuator at a Best Buy like an Arduino board any time soon. The company is still small, and the product is probably cost-prohibitive for the run-of-the-mill maker. But if you’re looking to mock up a prototype of a robot for a future product, HEBI’s offering may be right up your alley.

“If someone is trying to make a robot to walk old people through a nursing home or something, they may build prototype systems and get things up and running real quickly,” says Rollinson. “We’re creating the tools that people will build on top of. So if there is an application that really slots on top of our focus and vision, we will pursue an actual system, but right now we’re focused on giving the right set of tools to help other people level up.”