Social media giant Facebook has agreed to pay more than 100 million euros ($114 million) to end a fiscal fraud dispute, Italian tax authorities said Thursday.
Italy has already drawn similar agreements from Amazon, Apple and Google, joining EU neighbours seeking a bigger tax take from multinationals previously able to use loopholes allowing the booking of profits in countries with more favourable tax regimes.
The accord aims to “end the disagreement relating to tax inquiries undertaken by the financial police (GdF) at the behest of the Milan prosecutor for the period 2010-2016,” Italy’s tax authority said in a statement.
The authority added that Facebook Italy would be “making a payment of more than 100 million euros.”
Online retail behemoth Amazon agreed a similar deal last December, while in May last year Google agreed to pay 306 million euros to end a dispute relating primarily to 2009-2013 profits booked in Ireland.
Ireland has one of the lowest corporate tax rates in the European Union.
Apple had earlier, in December 2015, agreed to pay more than 300 million euros on Italian-generated profits dating back to 2008. (AFP)
Facebook has tried to shut down ads that illegally used South African billionaire Mark Shuttleworth as their "front man" - but hours later they popped up again.
Shuttleworth’s name and picture have been posted on adverts promoting cryptocurrency scams on Facebook and on fake news websites.
The images and ads look similar to those used for a scam called QProfit System, which surfaced earlier this year. That scam claims that its designer, Jerry Douglas, was asked by Shuttleworth to develop a cryptocurrency system. It also creates the impression that Shuttleworth is out for “revenge” after losing a R250.5 million lawsuit against the Reserve Bank.
Shuttleworth, who has an estimated fortune of R9.6 billion and is CEO of the open-source operating system provider Canonical, has denied any involvement in a blog post.
“I can’t comment on whether or not Jerry Douglas promotes a QProfit System and whether or not it’s fraud. But I can tell you categorically that there are many scams like this, and that this investment has absolutely nothing to do with me. I haven’t developed this software and I have no desire to defraud the South African government or anyone else. I’m doing what I can to get the fraudulent sites taken down. But please take heed and don’t fall for these scams,” wrote Shuttleworth.
Ads for the new version of the scam, which asks for an initial fee of $250 (R3,750), have recently appeared on Facebook. Here, Shuttleworth’s face and a false testimonial are used to promote schemes for apps called Bitcoin Trader and Bitcoin Revolution. The ads take users to different fake news websites, including a site called POIP News.
Facebook was alerted to the new scam ads last week. On Monday it responded by removing the ads.
“We do not allow adverts which are misleading or false on Facebook and encourage people to report any adverts that they believe infringe on their rights, or shouldn’t be on Facebook. As soon as these pages and ads were highlighted to us, we worked promptly to remove them and can confirm that several accounts and pages that violated our Advertising Policies have been taken down," said a spokesperson.
Facebook said on Friday that up to 50-million accounts were breached in a security flaw exploited by hackers.
The social network said it had learnt this week of the attack, which allowed hackers to steal “access tokens,” the equivalent of digital keys that enabled them to access users’ accounts.
“It’s clear that attackers exploited a vulnerability in Facebook’s code,” vice-president of product management Guy Rosen said.
“We’ve fixed the vulnerability and informed the police.”
Facebook CEO Mark Zuckerberg said engineers had discovered the breach on Tuesday, and patched it on Thursday.
“We don’t know if any accounts were actually misused,” Zuckerberg said. “But this is a serious issue.”
As a precaution, Facebook is temporarily taking down the “view as” feature – described as a privacy tool to let users see how their own profiles would look to other people.
“We face constant attacks from people who want to take over accounts or steal information around the world,” Zuckerberg said on his Facebook page.
“The reality is we need to continue developing new tools to prevent this from happening in the first place.”
Facebook, Google and other tech firms have agreed a code of conduct to do more to tackle the spread of fake news, due to concerns it can influence elections, the European Commission said on Wednesday.
Intended to stave off more heavy-handed legislation, the voluntary code covers closer scrutiny of advertising on accounts and websites where fake news appears, and working with fact checkers to filter it out, the commission said.
However, a group of media advisors criticised the companies, which also include Twitter, and lobby groups for the advertising industry, for failing to present more concrete measures.
Brussels, with EU parliamentary elections scheduled for May, is anxious to address the threat of foreign interference during campaigning.
Belgium, Denmark, Estonia, Finland, Greece, Poland, Portugal and Ukraine are also all due to hold national elections in 2019.
Russia has faced allegations, which it denies, of disseminating false information to influence the U.S. presidential election and Britain’s referendum on EU membership in 2016 as well as Germany’s national election in 2017.
The commission told the firms in April to draft a code of practice, or face regulatory action over what it said was their failure to do enough to remove misleading or illegal content.
European Digital Commissioner Mariya Gabriel said that Facebook, Google, Twitter, Mozilla and advertising groups had responded with several measures.
“The industry is committing to a wide range of actions, from transparency in political advertising to the closure of fake accounts and we welcome this,” she said in a statement.
The steps also include rejecting payment from sites that spread fake news, helping users understand why they have been targeted by specific ads, and distinguishing ads from editorial content.
However, the advisory group criticised the code, saying the companies had not offered measurable objectives to monitor its implementation.
“The platforms, despite their best efforts, have not been able to deliver a code of practice within the accepted meaning of effective and accountable self-regulation,” the group said, giving no further details.
Its members include the Association of Commercial Television in Europe, the European Broadcasting Union, the European Federation of Journalists and the International Fact-Checking Network, as well as several academics.
Google’s recent record €4.3 billion (£3.9 billion) fine is the latest action in a growing movement to tackle the dominance of big tech firms. Until now, most attention has been on the impact of this dominance on privacy, for example the recent Cambridge Analytica scandal that saw Facebook criticised for failing to tackle the unauthorised use of user data by a political campaigning firm.
As a result, some analysts and commentators have called for users to be given more control over their information. But this is a serious mistake.
Google and Facebook make money from their monopoly of our attention, not their access to our personal data. Even if, starting tomorrow, they had no access to our personal data for the purposes of targeting ads, they would still be dominant and hugely profitable because they can advertise to so many people, just as TV networks once were.
Limiting Facebook and Google’s access to personal data of users would make no difference to their monopoly power, or reduce the harmful effects of that power on innovation and freedom. In fact, any further controls on privacy are likely to play into the hands of the dominant firms. It would simply reinforce their monopoly position by increasing the cost of following privacy regulation and making it harder for potential competitors to enter and disrupt the market.
The true source of monopoly power
The tech giants have monopolies because of the convergence of three different phenomena. First, Google and Facebook operate as “platforms”, places where different participants connect. This is an ancient phenomenon. The market in the town square is a platform, where sellers and buyers congregate. Facebook is a platform, originally designed to connect one user with another to exchange content, though it quickly began attracting advertisers because they want to connect with the users too. Google is another platform, connecting users with content providers and advertisers.
Research shows that all platform businesses have a strong tendency to centralise a market, because the more customers they have, the more suppliers are attracted, and vice versa. As the first platform businesses in a sector grow, it becomes harder for new rivals to compete on equal terms. The initial advantages lead to entrenched monopolies and the market converges on a single or small number of platforms.
Owners of marketplaces and stock exchanges make good livings, but they are limited in their scope owing to the physical nature of their platforms. But the owners of the vast online platforms are in an entirely different league, because of a second phenomenon, one of the fundamental characteristics of the digital age: infinite, costless copying.
Once someone has a single copy of a piece of digital information they can make as many copies as they wish at the touch of a button at practically no cost. Different versions of eBay, for instance, can be created for every country in the world at practically no extra cost, giving it a reach that goes far beyond a physical auction house.
Expansion is virtually free, with infinite economies of scale. So Google, Facebook and other dominant tech firms have been able to scale up their services at an unprecedented rate, and with unprecedented profitability.
But costless copying would not be so profitable if it were truly unlimited. The final component of these extraordinary businesses is their exclusive right to make the copies. Thanks to intellectual property in the form of patents and copyrights, they control the digital information at the heart of their platforms, such as the algorithms that run Google’s search engine or the software that powers Facebook. Their products and platforms, and the software and algorithms that run them are all protected by laws we have made.
This contrasts with the most famous platform of the digital age: the internet itself. The internet is a platform just like Google and Facebook except that it is open. It is open in a technical sense because its protocols and software are free for anyone to use, but it is also open socially because anyone can connect to it whatever their background or circumstances.
The internet is living proof that we can have the benefits of a single platform without it becoming a monopoly, and it stands as a testament to the creativity and innovation that this fosters.
It also holds the solution to the present monopoly problem: openness. The fact that anyone can use, implement and build on the internet’s platform is what has kept it free and competitive.
So how would this work with platforms like Google or Facebook? At the moment Facebook and other social networks give us platforms on which to communicate and share content with others. Facebook determines who can use its platform and how they can do so. Anyone wanting to build or adapt the platform, for example to block ads or to create a new social network, must do so with Facebook’s permission.
Such permission is rarely granted. If you’re unhappy with the platform you have little option but to reluctantly accept it or lose access. If it were open this need not be the case. Just as with the internet, you could have one open platform that anyone could connect to and build on. Dislike the ads? Well, you can create a version that does away with these. Only want to message friends and see their photos? That could be possible, too. Openness means you are not restricted by the whims and desires of just one company.
The solution to these platforms’ monopolies is to make the software, algorithms and protocols on which they run open and free for anyone to use, build on and share. In addition, all users, competitors and innovators should have universal, equitable access to the platforms. Doing this is the only way to give everyone a stake in our digital future.
Facebook announced it will notify 800,000 people about a bug that unblocked accounts those users had previously blocked.
The bug was active between May 29 and June 5.
In a blog post, Facebook's chief privacy officer, Erin Egan, said blocked users still couldn't view posts that the person who blocked them shared only with friends, but they could have seen things that person shared more widely.
"We know that the ability to block someone is important — and we'd like to apologize and explain what happened," Egan said in the post.
Typically, when someone is blocked on Facebook, that person cannot view posts on your profile, chat with you on Messenger or add you as a friend. The user is also automatically unfriended. A person may want to block another user for various reasons, including after a romantic break-up or due to harassment.
Facebook said 83% of users impacted by the bug had one person temporarily unblocked. A user who was unblocked during that time may have been able to talk to the person who blocked them on Messenger.
The company says the issue has been resolved, and all previous settings have been reinstated.
British lawmakers want their European counterparts to quiz Facebook CEO Mark Zuckerberg about a scandal over improper use of millions of Facebook users’ data, as he will not give evidence in London himself.
Zuckerberg will be in Europe to defend the company after alleged misuse of its data by Cambridge Analytica, a British political consultancy that worked on U.S. President Donald Trump’s election campaign.
But while he will answer questions from lawmakers in Brussels on Tuesday, and is meeting French President Emmanuel Macron on Wednesday, he has so far declined to answer questions from British lawmakers, either in person or via video link.
Damian Collins, chair of the British parliament’s media committee, said on Tuesday that he believed Zuckerberg should still appear before British lawmakers.
“But if Mark Zuckerberg chooses not to address our questions directly, we are asking colleagues at the European Parliament to help us get answers - particularly on who knew what at the company, and when, about the data breach and the non-transparent use of political adverts which continue to undermine our democracy,” he said in a statement.
Last month, Facebook Chief Technical Officer Mike Schroepfer appeared before Collins’s Digital, Culture, Media and Sport Committee, which is investigating fake news.
But the lawmakers have said his testimony and subsequent written answers from the firm to follow-up questions have been inadequate.
Collins outlined deficiencies in Facebook’s answers so far in a letter to Rebecca Stimson, head of public policy at Facebook UK, which has been shared with the EU lawmakers who will quiz Zuckerberg. Collins requested a response from Facebook to his questions by June 4.
The bitter truth buried in recent headlines about how the political consulting company Cambridge Analytica used social media and messaging, primarily Facebook and WhatsApp, to try to sway voters in presidential elections in the US and Kenya is simply this: Facebook is the reason why fake news is here to stay.
Various news outlets, and former Cambridge Analytica executives themselves, confirmed that the company used campaign speeches, surveys, and, of course, social media and social messaging to influence Kenyans in both 2013 and 2017.
The media reports also revealed that, working on behalf of US President Donald Trump’s campaign, Cambridge Analytica had got hold of data from 50 million Facebook users, which they sliced and diced to come up with “psychometric” profiles of American voters.
The political data company’s tactics have drawn scrutiny in the past, so the surprise of these revelations came more from the “how” than the “what.” The real stunner was learning how complicit Facebook and WhatsApp, which is owned by the social media behemoth, had been in aiding Cambridge Analytica in its work.
The Cambridge Analytica scandal appears to be symptomatic of much deeper challenges that Facebook must confront if it’s to become a force for good in the global fight against false narratives.
These hard truths include the fact that Facebook’s business model is built upon an inherent conflict of interest. The others are the company’s refusal to take responsibility for the power it wields and its inability to come up with a coherent strategy to tackle fake news.
Facebook’s biggest challenges
Facebook’s first issue is its business model. It has mushroomed into a multibillion-dollar corporation because its revenue comes from gathering and using the data shared by its audience of 2.2 billion monthly users.
Data shapes the ads that dominate our news feeds. Facebook retrieves information from what we like, comment on and share; the posts we hide and delete; the videos we watch; the ads we click on; the quizzes we take. It was, in fact, data sifted from one of these quizzes that Cambridge Analytica bought in 2014. Facebook executives knew of this massive data breach back then but chose to handle the mess internally. They shared nothing with the public.
This makes sense if the data from that public is what fuels your company’s revenues. It doesn’t make sense, however, if your mission is to make the world a more open and connected place, one built on transparency and trust. A corporation that says it protects privacy while also making billions of dollars from data sets itself up for scandal.
This brings us to Facebook’s second challenge: its myopic vision of its own power. As repeated scandals and controversies have washed over the social network in the last couple of years, CEO Mark Zuckerberg’s response generally has been one of studied naivete. He seems to be in denial about his corporation’s singular influence and position.
Case in point: When it became clear in 2016 that fake news had affected American elections, Zuckerberg first dismissed that reality as “a pretty crazy idea.” In this latest scandal, he simply said nothing for days.
Throughout the world, news publishers report that 50% to 80% of their digital traffic comes from Facebook. No wonder Google and Facebook control 53% of the world’s digital and mobile advertising revenue. Yet Zuckerberg still struggles to accept that Facebook’s vast audience and its role as a purveyor of news and information combine to give it extraordinary power over what people consume, and by extension, how they behave.
All of this leads us to Facebook’s other challenge: its inability to articulate, and act on, a cogent strategy to attack fake news.
The fake news phenomenon
When Zuckerberg finally surfaced last month, he said out loud what a lot of people were already thinking: there may be other Cambridge Analyticas out there.
This is very bad news for anyone worried about truth and democracy. For in America, fake news helped to propel into power a man whose presidential campaign may have been a branding exercise gone awry. But in countries like Kenya, fake news can kill.
Zuckerberg and his Facebook colleagues must face this truth. Fake news may not create tribal or regional mistrust, but inflammatory videos and posts shared on social media certainly feed those tensions.
And false narratives spread deep and wide: In 2016, BuzzFeed News found that in some cases, a fake news story was liked, commented on and shared almost 500,000 times. A legitimate political news story might attract 75,000 likes, comments and shares.
After Zuckerberg was flogged for his initial statements about fake news, Facebook reached out to the Poynter Institute’s International Fact-checking Network in an effort to attack this scourge. Then in January 2018, the social network said that it was going to be more discriminating about how much news it would allow to find its way into the feeds of its users. In other words, more videos on cats and cooking, less news of any kind.
The policy sowed a lot of confusion and showed that Facebook is still groping for how to respond to fake news. It was also evidence that the social network does not understand that fake news endangers its own existence as well as the safety and security of citizens worldwide, especially in young democracies such as Kenya.
Angry lawmakers in the US and Europe, along with a burgeoning rebellion among its vast audience, may finally grab Facebook’s attention. But we will only hear platitudes and see superficial change unless Facebook faces hard truths about its reliance on data, accepts its preeminent place in today’s media ecosystem and embraces its role in fighting fake news.
Until then, we should brace ourselves for more Cambridge Analyticas.