Google workers around world protest harassment, inequality

Workers stand outside the Google offices after walking out as part of a global protest over workplace issues in Dublin, Ireland, November 1, 2018. REUTERS/Clodagh Kilcoyne

By Graham Fahy and Angela Moon

DUBLIN/NEW YORK (Reuters) – Over 1,000 Google employees and contractors in Asia, Europe and the United States staged brief midday walk-outs on Thursday, with more expected to follow at California headquarters, amid complaints of sexism, racism and unchecked executive power in their workplace.

Hundreds of women and men filed out of Google’s office in New York City and silently walked around the block for about 10 minutes, starting around 11:00 a.m. ET. A few held sheets of paper with messages including “Respect for women.”

Two blocks away, a larger crowd of people that appeared to number a thousand or more, including Google employees and New Yorkers not working for the company, filled a small park. Some held larger signs than those at the Google office, with more confrontational messages including “Time’s up Tech.”

“This is Google. We solve the toughest problems here. We all know that the status quo is unacceptable and if there is any company who can solve this, I think it is Google,” said Thomas Kneeland, a software engineer who said he has been at Google for three years.

Google employees had recently been receiving many emails from managers and colleagues encouraging them to participate in the walkout, he said. Just around 11 a.m., people started forming groups to leave the building. “We had engineers on our team bring their pagers since they were on-call, but that’s how we thought of the walkout. It’s important.”

The demonstrations follow a New York Times report last week that said Google in 2014 gave a $90 million exit package to Andy Rubin after the then-senior vice president was accused of sexual harassment.

Rubin denied the allegation in the story, which he also said contained “wild exaggerations” about his compensation. Google did not dispute the report.

The report energized a months-long movement inside Google to increase diversity, and improve treatment of women and minorities.

In a statement late on Wednesday, the organizers called on Google parent Alphabet Inc to add an employee representative to its board of directors and internally share pay-equity data. They also asked for changes to Google’s human resources practices intended to make bringing harassment claims a fairer process.

Google Chief Executive Sundar Pichai said in a statement that “employees have raised constructive ideas” and that the company was “taking in all their feedback so we can turn these ideas into action.”

GLOBAL ACTION

Hundreds more filed out of Google’s European headquarters in Dublin shortly after 1100 local time, while organizers shared photographs on social media of hundreds more leaving Google offices in London, Zurich, Berlin, Tokyo, and Singapore.

Irish employees left a note on their desks that read: “I’m not at my desk because I’m walking out with other Googlers and contractors to protest sexual harassment, misconduct, lack of transparency, and a workplace culture that’s not working for everyone,” national broadcaster RTE reported.

Google employs 7,000 people in Dublin, home to its largest facility outside the United States.

The dissatisfaction among Alphabet’s 94,000 employees and tens of thousands more contractors has not noticeably affected company shares. But employees expect Alphabet to face recruiting and retention challenges if their concerns go unaddressed.

Much of the organizing earlier this year was internal, including petition drives, brainstorming sessions with top executives and training from the workers’ rights group Coworker.org.

Since its founding two decades ago, Google has been known for its transparency with workers. Executives’ goals and insights into corporate strategy have been accessible to any employee.

But organizers said Google executives, like leaders at other companies affected by the #metoo movement, have been slow to address some structural issues.

“While Google has championed the language of diversity and inclusion, substantive actions to address systemic racism, increase equity, and stop sexual harassment have been few and far between,” organizers stated.

They said Google must publicly report its sexual harassment statistics and end forced arbitration in harassment cases. In addition, they asked that the chief diversity officer be able to directly advise the board.

(Additional reporting by Padraic Halpin in Dublin, Paresh Dave in San Francisco, editing by Larry King and Nick Zieminski)

Facebook, Google to tackle spread of fake news; advisors want more

FILE PHOTO - Commuters walk past an advertisement discouraging the dissemination of fake news at a train station in Kuala Lumpur, Malaysia March 28, 2018. REUTERS/Stringer

By Foo Yun Chee

BRUSSELS (Reuters) – Facebook, Google, and other tech firms have agreed on a code of conduct to do more to tackle the spread of fake news, due to concerns it can influence elections, the European Commission said on Wednesday.

Intended to stave off more heavy-handed legislation, the voluntary code covers closer scrutiny of advertising on accounts and websites where fake news appears, and working with fact checkers to filter it out, the Commission said.

But a group of media advisors criticized the companies, also including Twitter and lobby groups for the advertising industry, for failing to present more concrete measures.

With EU parliamentary elections scheduled for May, Brussels is anxious to address the threat of foreign interference during campaigning. Belgium, Denmark, Estonia, Finland, Greece, Poland, Portugal, and Ukraine are also all due to hold national elections next year.

Russia has faced allegations – which it denies – of disseminating false information to influence the U.S. presidential election and Britain’s referendum on European Union membership in 2016, as well as Germany’s national election last year.

The Commission told the firms in April to draft a code of practice or face regulatory action over what it said was their failure to do enough to remove misleading or illegal content.

European Digital Commissioner Mariya Gabriel said on Wednesday that Facebook, Google, Twitter, Mozilla, and advertising groups – which she did not name – had responded with several measures.

“The industry is committing to a wide range of actions, from transparency in political advertising to the closure of fake accounts and …we welcome this,” she said in a statement.

The steps also include rejecting payment from sites that spread fake news, helping users understand why they have been targeted by specific ads, and distinguishing ads from editorial content.

But the advisory group criticized the code, saying the companies had not offered measurable objectives to monitor its implementation.

“The platforms, despite their best efforts, have not been able to deliver a code of practice within the accepted meaning of effective and accountable self-regulation,” the group said, giving no further details.

Its members include the Association of Commercial Television in Europe, the European Broadcasting Union, the European Federation of Journalists and International Fact-Checking Network, and several academics.

(Reporting by Foo Yun Chee; editing by Philip Blenkinsop and John Stonestreet)

U.S. tech giants eye artificial intelligence as key to unlock China push

A Google sign is seen during the WAIC (World Artificial Intelligence Conference) in Shanghai, China, September 17, 2018. REUTERS/Aly Song

By Cate Cadell

SHANGHAI (Reuters) – U.S. technology giants, facing tighter content rules in China and the threat of a trade war, are targeting an easier way into the world’s second-largest economy – artificial intelligence.

Google, Microsoft Corp and Amazon.com Inc showcased their AI wares at a state-backed forum held in Shanghai this week against the backdrop of Beijing’s plans to build a $400 billion AI industry by 2025.

China’s government and companies may compete against U.S. rivals in the global AI race, but they are aware that gaining ground won’t be easy without a certain amount of collaboration.

“Hey Google, let’s make humanity great again,” Tang Xiao’ou, CEO of Chinese AI and facial recognition unicorn SenseTime, said in a speech on Monday.

Amazon and Microsoft announced plans on Monday to build new AI research labs in Shanghai. Google also showcased a growing suite of China-focused AI tools at its packed event on Tuesday.

Google in the past year has launched AI-backed products including a translation app and a drawing game, its first new consumer products in China since its search engine was largely blocked in 2010.

The World Artificial Intelligence Conference, which ends on Wednesday, is hosted by China’s top economic planning agency alongside its cyber and industry ministries. The conference aims to show the country’s growing might as a global AI player.

China’s ambition to be a world leader in AI has created an opening for U.S. firms, which attract the majority of top global AI talents and are keen to tap into China’s vast data.

The presence of global AI research projects is also a boon for China, which aims to become a global technology leader in the next decade.

Liu He, China’s powerful vice premier and the key negotiator in trade talks with the United States, said his country wanted a more collaborative approach to AI technology.

“As members of a global village, I hope countries can show inclusive understanding and respect for each other, deal with the double-edged sword that technologies can bring, and embrace AI challenges together,” he told the forum.

Beijing took an aggressive stance when it laid out its AI roadmap last year, urging companies, the government and military to give China a “competitive edge” over its rivals.

STATE-BACKED AI

Chinese attendees at the forum were careful to cite the guiding role of the state in the country’s AI sector.

“The development of AI is led by government and executed by companies,” a Chinese presenter said in between speeches on Monday by China’s top tech leaders, including Alibaba Group Holding Ltd chairman Jack Ma, Tencent Holdings Ltd chief Pony Ma and Baidu Inc CEO Robin Li.

While China may have enthusiasm for foreign AI projects, there is little indication that building up local AI operations will open doors for foreign firms in other areas.

China’s leaders still prefer to view the Internet as a sovereign project. Google’s search engine remains blocked, while Amazon had to step back from its cloud business in China.

Censorship and local data rules have also hardened in China over the past two years, creating new hoops for foreign firms to jump through if they want to tap the booming internet sector.

Nevertheless, some speakers paid tribute to foreign AI products. Xiaomi Corp chief executive Lei Jun hailed Google’s AlphaGo board game program as a major milestone, saying he was a fan of the game himself.

Alibaba’s Ma said innovation needed space to develop and it was not the government’s role to protect business.

“The government needs to do what the government should do, and companies need to do what they should do,” he said.

(Reporting by Cate Cadell; Editing by Adam Jourdan and Darren Schuettler)

New genre of artificial intelligence programs takes computer hacking to another level

FILE PHOTO: Servers for data storage are seen at Advania's Thor Data Center in Hafnarfjordur, Iceland August 7, 2015. REUTERS/Sigtryggur Ari

By Joseph Menn

SAN FRANCISCO (Reuters) – The nightmare scenario for computer security – artificial intelligence programs that can learn how to evade even the best defenses – may already have arrived.

That warning from security researchers is driven home by a team from IBM Corp that has used the artificial intelligence technique known as machine learning to build hacking programs capable of slipping past top-tier defensive measures. The group will unveil details of its experiment at the Black Hat security conference in Las Vegas on Wednesday.

State-of-the-art defenses generally rely on examining what the attack software is doing, rather than the more commonplace technique of analyzing software code for danger signs. But the new genre of AI-driven programs can be trained to stay dormant until they reach a very specific target, making them exceptionally hard to stop.

No one has yet boasted of catching any malicious software that clearly relied on machine learning or other variants of artificial intelligence, but that may just be because the attack programs are too good to be caught.

Researchers say that, at best, it’s only a matter of time. Free artificial intelligence building blocks for training programs are readily available from Alphabet Inc’s Google and others, and the ideas work all too well in practice.

“I absolutely do believe we’re going there,” said Jon DiMaggio, a senior threat analyst at cybersecurity firm Symantec Corp. “It’s going to make it a lot harder to detect.”

The most advanced nation-state hackers have already shown that they can build attack programs that activate only when they have reached a target. The best-known example is Stuxnet, which was deployed by U.S. and Israeli intelligence agencies against a uranium enrichment facility in Iran.

The IBM effort, named DeepLocker, showed that a similar level of precision can be available to those with far fewer resources than a national government.

In a demonstration using publicly available photos of a sample target, the team used a hacked version of video conferencing software that swung into action only when it detected the face of a target.

“We have a lot of reason to believe this is the next big thing,” said lead IBM researcher Marc Ph. Stoecklin. “This may have happened already, and we will see it two or three years from now.”

At a recent New York conference, Hackers on Planet Earth, defense researcher Kevin Hodges showed off an “entry-level” automated program he made with open-source training tools that tried multiple attack approaches in succession.

“We need to start looking at this stuff now,” said Hodges. “Whoever you personally consider evil is already working on this.”

(Reporting by Joseph Menn; Editing by Jonathan Weber and Susan Fenton)

Majority of Americans think social media platforms censor political views: Pew survey

FILE PHOTO: A young couple look at their phone as they sit on a hillside after sun set in El Paso, Texas, U.S., June 20, 2018. REUTERS/Mike Blake

By Angela Moon

NEW YORK (Reuters) – About seven out of ten Americans think social media platforms intentionally censor political viewpoints, the Pew Research Center found in a study released on Thursday.

The study comes amid an ongoing debate over the power of digital technology companies and the way they do business. Social media companies in particular, including Facebook Inc and Alphabet Inc’s Google, have recently come under scrutiny for failing to promptly tackle the problem of fake news as more Americans consume news on their platforms.

In the study of 4,594 U.S. adults, conducted between May 29 and June 11, roughly 72 percent of the respondents believed that social media platforms actively censored political views those companies found objectionable.

The perception that technology companies were politically biased and suppressed political speech was especially widespread among Republicans, the study showed.

About 85 percent of Republicans and Republican-leaning independents in the survey thought it likely that social media sites intentionally censor political viewpoints, with 54 percent saying it was “very” likely.

Sixty-four percent of Republicans also thought major technology companies as a whole supported the views of liberals over conservatives.

A majority of the respondents, or 51 percent, said technology companies should be regulated more than they are now, while only 9 percent said they should be regulated less.

(Reporting by Angela Moon; Editing by Bernadette Baum)

U.S. Senate advances bill to penalize websites for sex trafficking

People walk by the U.S. Capitol building in Washington, U.S., February 8, 2018. REUTERS/ Leah Millis

By Dustin Volz

WASHINGTON (Reuters) – The U.S. Senate voted 94-2 on Monday to advance legislation to make it easier to penalize operators of websites that facilitate online sex trafficking, setting up final passage of a bill as soon as Tuesday that would chip away at a bedrock legal shield for the technology industry.

The U.S. House of Representatives passed the legislation overwhelmingly last month. It is expected to be sent to and signed by President Donald Trump later this week.

The bill’s expected passage marks one of the most concrete actions in recent years from the U.S. Congress to tighten regulation of internet firms, which have drawn scrutiny from lawmakers in both parties over the past year because of an array of concerns regarding the size and influence of their platforms.

The Senate vote to limit debate on the sex trafficking legislation came as Facebook endured withering scrutiny over its data protection practices after reports that political analytics firm Cambridge Analytica harvested the private data on more than 50 million Facebook users through inappropriate means.

Several major internet companies, including Facebook and Alphabet’s Google, have been reluctant in the past to support any congressional effort to dent what is known as Section 230 of the Communications Decency Act, a decades-old law that protects them from liability for the activities of their users.

But facing political pressure, the internet industry slowly warmed to a proposal that began to gain traction in the Senate last year.

The legislation is a result of years of law enforcement lobbying for a crackdown on the online classified site backpage.com, which is used for sex advertising.

It would make it easier for states and sex-trafficking victims to sue social media networks, advertisers and others that fail to keep exploitative material off their platforms.

Some critics have warned that the measure would weaken Section 230 in a way that would only serve to help established internet giants, which possess larger resources to police their content, and not adequately address the problem.

Republican Senator Rand Paul and Democratic Senator Ron Wyden cast the only no votes.

(Reporting by Dustin Volz; Editing by Peter Cooney)

U.S. House passes bill to penalize websites for sex trafficking

FILE PHOTO - The U.S. Capitol Building is lit at sunset in Washington, U.S., December 20, 2016. REUTERS/Joshua Roberts

By Dustin Volz

WASHINGTON (Reuters) – The U.S. House of Representatives on Tuesday overwhelmingly passed legislation to make it easier to penalize operators of websites that facilitate online sex trafficking, chipping away at a bedrock legal shield for the technology industry.

The bill’s passage marks one of the most concrete actions in recent years from the U.S. Congress to tighten regulation of internet firms, which have drawn heavy scrutiny from lawmakers in both parties over the past year due to an array of concerns regarding the size and influence of their platforms.

The House passed the measure 388-25. It still needs to pass the U.S. Senate, where similar legislation has already gained substantial support, and then be signed by President Donald Trump before it can become law.

Speaker Paul Ryan, in a statement before the vote, said the bill would help “put an end to modern-day slavery here in the United States.”

The White House issued a statement generally supportive of the bill, but said the administration “remains concerned” about certain provisions that it hopes can be resolved in the final legislation.

Several major internet companies, including Alphabet Inc’s Google and Facebook Inc, had been reluctant to support any congressional effort to dent what is known as Section 230 of the Communications Decency Act, a decades-old law that protects them from liability for the activities of their users.

But facing political pressure, the internet industry slowly warmed to a proposal that gained traction in the Senate last year, and eventually endorsed it after it gained sizeable bipartisan support.

Republican Senator Rob Portman, a chief architect of the Senate proposal, said in a statement he supported the House’s similar version and called on the Senate to quickly pass it.

The legislation is a result of years of law-enforcement lobbying for a crackdown on the online classified site backpage.com, which is used for sex advertising.

It would make it easier for states and sex-trafficking victims to sue social media networks, advertisers and others that fail to keep exploitative material off their platforms.

Some critics warned that the House measure would weaken Section 230 in a way that would only serve to further help established internet giants, which possess larger resources to police their content, and not adequately address the problem.

“This bill will only prop up the entrenched players who are rapidly losing the public’s trust,” Democratic Senator Ron Wyden, an original author of Section 230, said. “The failure to understand the technological side effects of this bill – specifically that it will become harder to expose sex-traffickers, while hamstringing innovation – will be something that this Congress will regret.”

(Reporting by Dustin Volz; editing by Sandra Maler and Lisa Shumaker)

London attacker took steroids before deadly rampage, inquest told

Forensic investigators and police officers work on Westminster Bridge the morning after an attack by a man driving a car and wielding a knife left five people dead and dozens injured, in London, Britain, March 23, 2017.

LONDON (Reuters) – The man who mowed down pedestrians on London’s Westminster Bridge before killing a police officer outside Britain’s parliament last year had taken steroids beforehand, a London court heard on Monday.

Last March Khalid Masood, 52, killed four people on the bridge before, armed with two carving knives, he stabbed to death an unarmed police officer in the grounds of parliament. He was shot dead at the scene.

It was the first of five attacks in Britain last year that police blamed on terrorism.

A submission to a pre-inquest hearing into the fatalities at London’s Old Bailey Court said there was evidence that Masood had taken anabolic steroids in the hours or days before his death.

“A more specialist pharmaceutical toxicologist … has been instructed to prepare a report addressing how steroid use may have affected Khalid Masood,” the submission by the inquiry’s lawyer Jonathan Hough said.

The hearing also heard from Gareth Patterson, a lawyer representing relatives of four of the victims, who lambasted tech firms over their stance on encryption and failing to remove radicalizing material from websites.

Patterson said families wanted answers about how Masood, who was known to the UK security service MI5, was radicalized and why shortly before his attack, he was able to share an extremist document via WhatsApp.

He said victims’ relatives could not understand “why it is that radicalizing material continues to be freely available on the internet”.

“We do not understand why it’s necessary for WhatsApp, Telegram and these sort of media applications to have end-to-end encryption,” he told the hearing at London’s Old Bailey court.

Patterson told Reuters following the hearing that he was “fed up” with prosecuting terrorism cases that featured encryption, particularly the WhatsApp messaging service.

“How many times do we have to have this?” he said.

The British government has been pressuring companies to do more to remove extremist content and to rein in encryption, which it says allows terrorists and criminals to communicate without being monitored by police and intelligence agencies, making it hard for the authorities to track them down.

However, it has met quiet resistance from tech leaders such as Facebook, Google and Twitter, and critics say ending encryption would weaken security for legitimate users and open a back door for government snooping.

Samantha Leek, the British government’s lawyer, said the issues over encryption and radicalization were a matter of public policy and too wide for an inquest to consider.

Police say Masood planned and carried out his attack alone, despite claims of responsibility from Islamic State. A report in December confirmed he was known to MI5 for associating with extremists, particularly between 2010 and 2012, but was not considered a threat.

Coroner Mark Lucraft said the inquest, which will begin in September, would seek to answer “obvious and understandable questions” the families might have.

(Reporting by Michael Holden; editing by Guy Faulconbridge)

In reversal, U.S. internet firms back bill to fight online sex trafficking

A computer keyboard is seen in Bucharest April 3, 2012.

By Dustin Volz

WASHINGTON (Reuters) – Major U.S. internet firms on Friday said they would support legislation to make it easier to penalize operators of websites that facilitate online sex trafficking, marking a sharp reversal for Silicon Valley on an issue long considered a top policy priority.

The decision to endorse a measure advancing in the U.S. Senate could clear the way for Congress to pass the first rewrite of a law adopted 21 years ago that is widely considered a bedrock legal shield for the internet industry.

Michael Beckerman, president of the Internet Association, said in a statement it supported a bipartisan proposal advancing in the U.S. Senate making it easier for states and sex-trafficking victims to sue social media networks, advertisers and others that fail to keep exploitative material off their platforms.

“Important changes made to (Stop Enabling Sex Traffickers Act) will grant victims the ability to secure the justice they deserve, allow internet platforms to continue their work combating human trafficking, and protect good actors in the ecosystem,” Beckerman said. His organization represents tech companies including Facebook, Amazon and Alphabet’s Google.

This week, the U.S. Senate Commerce Committee said it would vote next week on the bill authored by Republican Rob Portman and Democrat Richard Blumenthal.

The internet industry has fought such a change in the law for years, but now Washington is stepping up scrutiny on the sector on a range of policy issues after decades of hands-off regulation.

U.S. technology companies had long opposed any legislation seeking to amend Section 230 of the decades-old Communications Decency Act, arguing it is a bedrock legal protection for the internet that could thwart digital innovation and prompt endless litigation.

Bill negotiators agreed to make a handful of technical changes to the draft legislation, which Beckerman said helped earn support of the internet companies.

Those changes include clarifying that criminal charges are based on violations of federal human trafficking law and that the standard for liability requires a website to “knowingly” assist or facilitate trafficking.

(Reporting by Dustin Volz; Editing by David Gregorio)

Social media executives to testify Nov. 1 about Russia and U.S. election

The Twitter application is seen on a phone screen August 3, 2017. REUTERS/Thomas White

WASHINGTON (Reuters) – Executives from Facebook Inc <FB.O>, Twitter Inc <TWTR.N> and Alphabet Inc’s <GOOGL.O> Google have been asked to testify about Russian meddling in the 2016 U.S. election before a House of Representatives panel on Nov. 1, a congressional aide said on Thursday.

Executives from the companies were already due to appear the same day before the Senate Intelligence Committee, which is also investigating Moscow’s alleged role in the election.

But the aide said they had also been asked to offer testimony at a public hearing of the House Intelligence Committee.

Aides to the committee’s leaders declined comment. It is House Intelligence policy not to discuss the interview schedule.

Some U.S. lawmakers, increasingly alarmed about evidence that hackers used the internet to spread fake news and otherwise influence last year’s election, have been pushing for more information about social networks in particular.

The Senate and House intelligence committees are two of the main congressional panels probing allegations that Russia sought to interfere in the U.S. election to boost Republican President Donald Trump’s chances at winning the White House, and possible collusion between Trump associates and Russia.

Moscow denies any such activity, and Trump has repeatedly dismissed allegations of collusion.

Facebook confirmed that company officials would testify. Google and Twitter did not immediately respond to requests for comment.

(Reporting by Patricia Zengerle; Editing by Tom Brown)