Facebook is to be sued in Europe over the major leak of user data that dates back to 2019 but which only came to light recently after information on more than 533 million accounts was found posted for free download on a hacker forum.
Today Digital Rights Ireland (DRI) announced it’s commencing a “mass action” to sue Facebook, citing the right to monetary compensation for breaches of personal data that’s set out in the European Union’s General Data Protection Regulation (GDPR).
Article 82 of the GDPR provides for a “right to compensation and liability” for those affected by violations of the law. Since the regulation came into force, in May 2018, related civil litigation has been on the rise in the region.
The Ireland-based digital rights group is urging Facebook users who live in the European Union or European Economic Area to check whether their data was breached — via the haveibeenpwned website (which lets you check by email address or mobile number) — and sign up to join the case if so.
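For readers who would rather script that lookup, haveibeenpwned also offers a v3 API whose breach-search endpoint requires a paid API key. A minimal sketch follows; the key and account values are placeholders, and only the request is built here so you can pair it with any HTTP client:

```python
from urllib.parse import quote

HIBP_BASE = "https://haveibeenpwned.com/api/v3"

def build_breach_request(account: str, api_key: str):
    """Build the URL and headers for an HIBP breached-account lookup."""
    # The account goes in the path, so it must be URL-encoded.
    url = f"{HIBP_BASE}/breachedaccount/{quote(account)}"
    headers = {
        "hibp-api-key": api_key,             # the breach endpoint requires a key
        "user-agent": "breach-check-sketch", # HIBP rejects requests without a UA
    }
    return url, headers
```

Pass the result to an HTTP client of your choice (e.g. `requests.get(url, headers=headers)`); a 404 response means the account wasn’t found in any known breach.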
Information leaked via the breach includes Facebook IDs, locations, mobile phone numbers, email addresses, relationship status and employer.
Facebook has been contacted for comment on the litigation. Update: A Facebook spokesperson said:
We understand people’s concerns, which is why we continue to strengthen our systems to make scraping from Facebook without our permission more difficult and go after the people behind it. As LinkedIn and Clubhouse have shown, no company can completely eliminate scraping or prevent data sets like these from appearing. That’s why we devote substantial resources to combat it and will continue to build out our capabilities to help stay ahead of this challenge.
The tech giant’s European headquarters is located in Ireland — and earlier this week the national data watchdog opened an investigation, under EU and Irish data protection laws.
A mechanism in the GDPR for simplifying investigation of cross-border cases means Ireland’s Data Protection Commission (DPC) is Facebook’s lead data regulator in the EU. However, it has been criticized over its handling of, and approach to, GDPR complaints and investigations — including the length of time it takes to issue decisions on major cross-border cases. And this is particularly true for Facebook.
With the three-year anniversary of the GDPR fast approaching, the DPC has multiple open investigations into various aspects of Facebook’s business but has yet to issue a single decision against the company.
(The closest it’s come is a preliminary suspension order issued last year, in relation to Facebook’s EU to U.S. data transfers. However, that complaint long predates GDPR; and Facebook immediately filed to block the order via the courts. A resolution is expected later this year after the litigant filed his own judicial review of the DPC’s processes.)
Since May 2018 the EU’s data protection regime has — at least on paper — baked in fines of up to 4% of a company’s global annual turnover for the most serious violations.
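For context on how that cap works, Article 83(5) of the GDPR sets the maximum at whichever is higher: €20 million or 4% of worldwide annual turnover. A quick illustration (the turnover figure is invented):

```python
def max_gdpr_fine(annual_turnover_eur: int) -> float:
    """Upper bound on an Article 83(5) fine: the greater of EUR 20M or 4% of turnover."""
    return max(20_000_000.0, annual_turnover_eur * 4 / 100)

print(max_gdpr_fine(85_000_000_000))  # hypothetical EUR 85B turnover -> 3400000000.0
print(max_gdpr_fine(100_000_000))     # small company: the EUR 20M ceiling applies -> 20000000.0
```

As the rest of this piece notes, actual penalties issued so far sit nowhere near this theoretical maximum.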
Again, though, the sole GDPR fine issued to date by the DPC against a tech giant (Twitter) is very far off that theoretical maximum. Last December the regulator announced a €450,000 (~$547,000) sanction against Twitter — which works out to around just 0.1% of the company’s full-year revenue.
That penalty was also for a data breach — but one which, unlike the Facebook leak, was publicly disclosed when Twitter found it in 2019. Facebook, by contrast, never disclosed the vulnerability it discovered and claims to have fixed by September 2019, the flaw that has now led to the leak of 533 million accounts. That suggests it should face a higher sanction from the DPC than Twitter received.
However, even if Facebook ends up with a more substantial GDPR penalty for this breach, the watchdog’s caseload backlog and plodding procedural pace make it hard to envisage a swift resolution to an investigation that’s only a few days old.
Judging by past performance it’ll be years before the DPC decides on this 2019 Facebook leak — which likely explains why the DRI sees value in instigating class action-style litigation in parallel to the regulatory investigation.
“Compensation is not the only thing that makes this mass action worth joining. It is important to send a message to large data controllers that they must comply with the law and that there is a cost to them if they do not,” DRI writes on its website.
It also submitted a complaint about the Facebook breach to the DPC earlier this month, writing then that it was “also consulting with its legal advisors on other options including a mass action for damages in the Irish Courts”.
It’s clear that the GDPR enforcement gap is creating a growing opportunity for litigation funders to step in across Europe and take a punt on suing for data-related compensation — with a number of other mass actions announced last year.
In the case of DRI, its focus is evidently on seeking to ensure that digital rights are upheld. But it told RTE it believes compensation claims that force tech giants to pay money to users whose privacy rights have been violated are the best way to make them legally compliant.
Facebook, meanwhile, has sought to play down the breach it failed to disclose in 2019 — claiming it’s “old data” — a deflection that ignores the fact that people’s dates of birth don’t change (nor do most people routinely change their mobile number or email address).
Plenty of the “old” data exposed in this latest massive Facebook leak will be very handy for spammers and fraudsters to target Facebook users — and also now for litigators to target Facebook for data-related damages.
Facebook announced this morning that it will begin testing a new experience for discovering businesses in its News Feed in the U.S. When the test is live, users can tap on topics they’re interested in underneath posts and ads in their News Feed in order to explore related content from businesses. The change comes at a time when Facebook has been arguing that Apple’s App Tracking Transparency update will hurt its small business customers — a claim many have dismissed as misleading, but one that nevertheless led some mom-and-pop shops to express concern about the impact on their ad targeting capabilities. This new test is an example of how easily Facebook can tweak its News Feed to build out more data on its users, if needed.
The company suggests users may see the change under posts and ads from businesses selling beauty products, fitness or clothing, among other things.
The idea here is that Facebook would direct users to related businesses through a News Feed feature, when they take a specific action to discover related content. This, in turn, could help Facebook create a new set of data on its users, in terms of which users clicked to see more, and what sort of businesses they engaged with, among other things. Over time, it could turn this feature into an ad unit, if desired, where businesses could pay for higher placement.
“People already discover businesses while scrolling through News Feed, and this will make it easier to discover and consider new businesses they might not have found on their own,” the company noted in a brief announcement.
Facebook didn’t detail its further plans for the test, but said that as it learns from how users interact with the feature, it will expand the experience to more people and businesses.
Along with news of the test, Facebook said it will roll out more tools for business owners this month, including the ability to create, publish and schedule Stories to both Facebook and Instagram; make changes and edits to Scheduled Posts; and soon, create and manage Facebook Photos and Albums from Facebook’s Business Suite. It will also soon add the ability to create and save Facebook and Instagram posts as drafts from the Business Suite mobile app.
Related to the business updates, Facebook updated features across ad products focused on connecting businesses with customer leads, including Lead Ads, Call Ads and Click to Messenger Lead Generations.
Facebook earlier this year announced a new Facebook Page experience that gave businesses the ability to engage on the social network with their business profile for things like posting, commenting and liking, and access to their own, dedicated News Feed. It also removed the Like button in favor of focusing on Followers.
It is not a coincidence that Facebook is touting its tools for small businesses at a time when there’s concern — much of it loudly shouted by Facebook itself — that its platform could be less useful to small business owners in the near future, when ad targeting capabilities become less precise as users vote “no” when Facebook’s iOS app asks if it can track them.
Instagram today will begin a new test around hiding Like counts on users’ posts, following its experiments in this area which first began in 2019. This time, however, Instagram is not enabling or disabling the feature for more users. Instead, it will begin to explore a new option where users get to decide what works best for them — either choosing to see the Like counts on others’ posts, or not. Users will also be able to turn off Like counts on their own posts, if they choose. Facebook additionally confirmed it will begin to test a similar experience on its own social network.
Instagram says tests involving Like counts were deprioritized after COVID-19 hit, as the company focused on other efforts needed to support its community. (Except for that brief period this March when Instagram accidentally hid Likes for more users due to a bug.)
The company says it’s now revisiting the feedback it collected from users during the tests and found a wide range of opinions. Originally, the idea with hiding Like counts was about reducing the anxiety and embarrassment that surrounds posting content on the social network. That is, people would stress over whether their post would receive enough Likes to be deemed “popular.” This problem was particularly difficult for Instagram’s younger users, who care much more about what their peers think — so much so that they would take down posts that didn’t receive “enough” Likes.
In addition, the removal of Likes helped reduce the sort of herd mentality that drives people to like things that are already popular, as opposed to judging the content for themselves.
But during tests, not everyone agreed the removal of Likes was a change for the better. Some people said they still wanted to see Like counts so they could track what was trending and popular. The argument for keeping Likes was more prevalent among the influencer community, where creators used the metric in order to communicate their value to partners, like brands and advertisers. Here, lower engagement rates on posts could directly translate to lower earnings for these creators.
Both arguments for and against Likes have merit, which is why Instagram’s latest test will put the choice back into users’ own hands.
This new test will be enabled for a small percentage of users globally on Instagram, the company says.
If you’ve been opted in, you’ll find a new option to hide the Likes from within the app’s Settings. This will prevent you from seeing Likes on other people’s posts as you scroll through your Instagram Feed. As a creator, you’ll be able to hide Likes on a per-post basis via the three-dot “…” menu at the top. Even if Likes are disabled publicly, creators are still able to view Like counts and other engagements through analytics, just as they did before.
The tests on Facebook, which has also been testing Like count removals for some time, have not yet begun. Facebook tells TechCrunch those will roll out in the weeks ahead.
Making Like counts a choice may initially seem like it could help address everyone’s needs. But in reality, if the wider influencer community chooses to continue using Likes as a currency that translates to popularity and job opportunities, then other users will continue to do the same.
Ultimately, communities themselves have to decide what sort of tone they want to set, preferably from the outset — before you’ve attracted millions of users who will be angry when you later try to change course.
There’s also a question as to whether social media users are really hungry for a “Like-free”, safer space. For years we’ve seen startups (Minutiae, Vero, Dayflash, Oggl, and now newcomers like the troubled Dispo or the under-the-radar Herd) build an “anti-Instagram” of sorts by dropping one or more Instagram features, like algorithmic feeds, Likes and other engagement mechanisms. But Instagram has yet to fail because of an anti-Instagram rival. If anything is a threat, it’s a new type of social network entirely, like TikTok (where, it should be noted, getting Likes and engagement is still very important for creator success).
Instagram didn’t say how long the new tests would last or if and when the features would roll out more broadly.
“We’re testing this on Instagram to start, but we’re also exploring a similar experience for Facebook. We will learn from this new small test and have more to share soon,” a Facebook company spokesperson said.
Facebook’s self-styled ‘Oversight Board’ (FOB) has announced an operational change that looks intended to respond to criticism of the limits of the self-regulatory content-moderation decision review body: It says it’s started accepting requests from users to review decisions to leave content up on Facebook and Instagram.
The move expands the FOB’s remit beyond reviewing (and mostly reversing) content takedowns — an arbitrary limit that critics said aligns it with the economic incentives of its parent entity, given that Facebook’s business benefits from increased engagement with content (and outrageous content drives clicks and makes eyeballs stick).
“So far, users have been able to appeal content to the Board which they think should be restored to Facebook or Instagram. Now, users can also appeal content to the Board which they think should be removed from Facebook or Instagram,” the FOB writes, adding that it will “use its independent judgment to decide what to leave up and what to take down”.
“Our decisions will be binding on Facebook,” it adds.
The ability to request an appeal on content Facebook wouldn’t take down has been added across all markets, per Facebook. But the tech giant said it will take some “weeks” for all users to get access as it said it’s rolling out the feature “in waves to ensure stability of the product experience”.
While the FOB can now get individual pieces of content taken down from Facebook/Instagram — i.e. if the Board believes it’s justified in reversing an earlier decision by the company not to remove content — it cannot make Facebook adopt any associated suggestions vis-a-vis its content moderation policies generally.
That’s because Facebook has never said it will be bound by the FOB’s policy recommendations; only by the final decision made per review.
That in turn limits the FOB’s ability to influence the shape of the tech giant’s approach to speech policing. And indeed the whole effort remains inextricably bound to Facebook which devised and structured the FOB — writing the Board’s charter and bylaws, and hand picking the first cohort of members. The company thus continues to exert inescapable pull on the strings linking its self-regulatory vehicle to its lucrative people-profiling and ad-targeting empire.
The FOB getting the ability to review content ‘keep ups’ (if we can call them that) is also essentially irrelevant when you consider the ocean of content Facebook has ensured the Board won’t have any say in moderating — because its limited resources/manpower mean it can only ever consider a fantastically tiny subset of the cases referred to it for review.
For an oversight body to provide a meaningful limit on Facebook’s power it would need to have considerably more meaty (i.e. legal) powers; be able to freely range across all aspects of Facebook’s business (not just review user generated content); and be truly independent of the adtech mothership — as well as having meaningful powers of enforcement and sanction.
So, in other words, it needs to be a public body, functioning in the public interest.
Instead, while Facebook applies its army of in house lawyers to fight actual democratic regulatory oversight and compliance, it has splashed out to fashion this bespoke bureaucracy that can align with its speech interests — handpicking a handful of external experts to pay to perform a content review cameo in its crisis PR drama.
Unsurprisingly, then, the FOB has mostly moved the needle in a speech-maximizing direction so far — while expressing some frustration at the limited deck of cards Facebook has dealt it.
Most notably, the Board still has a decision pending on whether to reverse Facebook’s indefinite ban on former US president Donald Trump. If it reverses that decision, Facebook users won’t have any recourse to appeal the restoration of Trump’s account.
The only available route would, presumably, be for users to report future Trump content to Facebook for violating its policies — and if Facebook refuses to take that content down, users could try to request a FOB review. But, again, there’s no guarantee the FOB will accept any such review requests. (Indeed, if the board chooses to reinstate Trump, that may make it harder for it to accept requests to review Trump content, at least in the short term, in the interests of keeping a diverse case file.)
How to ask for a review after content isn’t removed
To request that the FOB review a piece of content that’s been left up, a user of Facebook/Instagram first has to report the content to Facebook/Instagram.
If the company decides to keep the content up, Facebook says the reporting person will receive an Oversight Board Reference ID (a ten-character string that begins with ‘FB’) in their Support Inbox — which they can use to appeal its ‘no takedown’ decision to the Oversight Board.
There are several hoops to jump through to make an appeal: following on-screen instructions, Facebook says, the user will be taken to the Oversight Board website, where they need to log in with the account to which the reference ID was issued.
They will then be asked to provide responses to a number of questions about their reasons for reporting the content (to “help the board understand why you think Facebook made the wrong decision”).
Once an appeal has been submitted, the Oversight Board will decide whether or not to review it. The board only selects a certain number of “eligible appeals” to review; and Facebook has not disclosed the proportion of requests the Board accepts for review vs submissions it receives — per case or on aggregate. So how much chance of submission success any user has for any given piece of content is an unknown (and probably unknowable) quantity.
Users who have submitted an appeal against content that was left up can check the status of their appeal via the FOB’s website — again by logging in and using the reference ID.
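Incidentally, the Reference ID is described only as a ten-character string beginning with “FB”. A minimal client-side sanity check might look like the sketch below; the assumption that the remaining eight characters are alphanumeric is ours, not anything Facebook documents:

```python
import re

# Shape per the article: ten characters, starting with "FB".
# The alphanumeric tail is an illustrative assumption.
REFERENCE_ID = re.compile(r"FB[A-Za-z0-9]{8}")

def looks_like_reference_id(value: str) -> bool:
    """True if value matches the described Oversight Board Reference ID shape."""
    return REFERENCE_ID.fullmatch(value) is not None
```

Such a check would only catch obvious typos before a user pastes the ID into the board’s site; the real validation happens server-side.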
A further limitation is time: Facebook notes there’s a deadline for appealing decisions to the FOB.
“Bear in mind that there is a time limit on appealing decisions to the Oversight Board. Once the window to appeal a decision has expired, you will no longer be able to submit it,” it writes in its Help Center, without specifying how long users have to get their appeal in (we asked Facebook to confirm this and it’s 15 days).
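Given that 15-day window, working out the last possible day to submit is simple date arithmetic (the decision date below is purely illustrative):

```python
from datetime import date, timedelta

APPEAL_WINDOW_DAYS = 15  # per Facebook's confirmation to us

def appeal_deadline(decision_date: date) -> date:
    """Last day an appeal can still be submitted to the Oversight Board."""
    return decision_date + timedelta(days=APPEAL_WINDOW_DAYS)

# e.g. a 'no takedown' decision issued April 13, 2021
print(appeal_deadline(date(2021, 4, 13)))  # -> 2021-04-28
```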
Facebook confirmed it’s testing a video speed-dating app called Sparked, after the app’s website was spotted by The Verge. Unlike dating app giants such as Tinder, Sparked users don’t swipe on people they like or direct message others. Instead, they cycle through a series of short video dates during an event to make connections with others. The product itself is being developed by Facebook’s internal R&D group, the NPE Team, but had not been officially announced.
“Sparked is an early experiment by New Product Experimentation,” a spokesperson for Facebook’s NPE Team confirmed to TechCrunch. “We’re exploring how video-first speed dating can help people find love online.”
They also characterized the app as undergoing a “small, external beta test” designed to generate insights about how video dating could work, in order to improve people’s experiences with Facebook products. The app is not currently live on app stores, only the web.
Sparked is, however, preparing to test the experience at a Chicago Date Night event on Wednesday, The Verge’s report noted.
During the sign-up process, Sparked tells users to “be kind,” “keep this a safe space,” and “show up.” A walkthrough of how the app works also explains that participants will meet face to face during a series of 4-minute video dates, which they can then follow up with a 10-minute date if all goes well. They can additionally choose to exchange contact info, like phone numbers, emails, or Instagram handles.
Facebook, of course, already offers a dating app product, Facebook Dating.
That experience, which takes place inside Facebook itself, first launched in 2018 outside the U.S., and then arrived in the U.S. the following year. In the early days of the pandemic, Facebook announced it would roll out a sort of virtual dating experience that leveraged Messenger for video chats — a move that came at a time when many other dating apps in the market also turned to video to serve users under lockdowns. These video experiences could potentially compete with Sparked, unless the new product’s goal is to become another option inside Facebook Dating itself.
Despite the potential reach, Facebook’s success in the dating market is not guaranteed, some analysts have warned. People don’t think of Facebook as a place to go meet partners, and the dating product today is still separated from the main Facebook app for privacy purposes. That means it can’t fully leverage Facebook’s network effects to gain traction, as users in this case may not want their friends and family to know about their dating plans.
Facebook’s competition in dating is fierce, too. Even the pandemic didn’t slow down the dating app giants, like Match Group or newly IPO’d Bumble. Tinder’s direct revenues increased 18% year-over-year to $1.4 billion in 2020, Match Group reported, for instance. Direct revenues from the company’s non-Tinder brands collectively increased 16%. And Bumble topped its revenue estimates in its first quarter as a public company, pulling in $165.6 million in the fourth quarter.
Facebook, on the other hand, has remained fairly quiet about its dating efforts. Though the company has cited over 1.5 billion matches across the 20 countries where it’s live, a “match” doesn’t indicate a successful pairing — in fact, that sort of outcome may not be measured at all. But it’s early days for the product, which only rolled out to European markets this past fall.
The NPE Team’s experiment in speed dating could ultimately help to inform Facebook of what sort of new experiences a dating app user may want to use, and how.
The company didn’t say if or when Sparked would roll out more broadly.
Facebook has removed 16,000 groups that were trading fake reviews on its platform after another intervention by the UK’s Competition and Markets Authority (CMA), the regulator said today.
The CMA has been leaning on tech giants to prevent their platforms being used as thriving marketplaces for selling fake reviews since it began investigating the issue in 2018 — pressuring both eBay and Facebook to act against fake review sellers back in 2019.
The two companies pledged to do more to tackle the insidious trade last year, after coming under further pressure from the regulator — which found that Facebook-owned Instagram was also a thriving hub of fake review trades.
The latest intervention by the CMA looks considerably more substantial than last year’s action — when Facebook removed a mere 188 groups and disabled 24 user accounts. Although it’s not clear how many accounts the tech giant has banned and/or suspended this time, it has removed orders of magnitude more groups. (We’ve asked.)
Update: We understand that the regulator has focused on the removal of groups trading misleading/fake reviews, rather than individual accounts — as banned or suspended users are able to create new profiles, whereas removing the group in which fake reviews are being traded is seen as a more effective way to impact and deter the activity.
Facebook was also contacted with questions but it did not answer what we asked directly, sending us this statement instead:
“We have engaged extensively with the CMA to address this issue. Fraudulent and deceptive activity is not allowed on our platforms, including offering or trading fake reviews. Our safety and security teams are continually working to help prevent these practices.”
Since the CMA has been raising the issue of fake review trading, Facebook has been repeatedly criticised for not doing enough to clean up its platforms, plural.
Today the regulator said the social media giant has made further changes to the systems it uses for “identifying, removing and preventing the trading of fake and/or misleading reviews on its platforms to ensure it is fulfilling its previous commitments”.
It’s not clear why it’s taken Facebook well over a year — and a number of high profile interventions — to dial up action against the trade in fake reviews. But the company suggested that the resources it has available to tackle the problem had been strained as a result of the COVID-19 pandemic and associated impacts, such as home working. (Facebook’s full year revenue increased in 2020 but so too did its expenses.)
According to the CMA, the changes Facebook has made to its system for combating traders of fake reviews include:
- suspending or banning users who are repeatedly creating Facebook groups and Instagram profiles that promote, encourage or facilitate fake and misleading reviews
- introducing new automated processes that will improve the detection and removal of this content
- making it harder for people to use Facebook’s search tools to find fake and misleading review groups and profiles on Facebook and Instagram
- putting in place dedicated processes to make sure that these changes continue to work effectively and stop the problems from reappearing
Again it’s not clear why Facebook would not have already been suspending or banning repeat offenders — at least, not if it was actually taking good faith action to genuinely quash the problem, rather than seeing if it could get away with doing the bare minimum.
Commenting in a statement, Andrea Coscelli, chief executive of the CMA, essentially makes that point, saying: “Facebook has a duty to do all it can to stop the trading of such content on its platforms. After we intervened again, the company made significant changes — but it is disappointing it has taken them over a year to fix these issues.”
“We will continue to keep a close eye on Facebook, including its Instagram business. Should we find it is failing to honour its commitments, we will not hesitate to take further action,” Coscelli added.
A quick search on Facebook’s platform for UK groups trading in fake reviews appears to return fewer obviously dubious results than when we checked in on this problem in 2019 and 2020. That said, the results that were returned included a number of private groups, so it was not immediately possible to verify what content is being solicited from members.
We did also find a number of Facebook groups offering Amazon reviews intended for other European markets, such as France and Spain (and in one public group aimed at Amazon Spain we found someone offering a “fee” via PayPal for a review; see below screengrab) — suggesting Facebook isn’t applying the same level of attention to tackling fake reviews that are being traded by users in markets where it’s faced fewer regulatory pokes than it has in the UK.
Software makers can’t catch every bug every time, but Facebook had ample warning about the privacy problems with its “contact import” feature.
As COVID-19 vaccines are becoming more readily available to larger groups of the U.S. population, Facebook has teamed up with the U.S. Department of Health and Human Services (HHS) and Centers for Disease Control and Prevention (CDC) to launch new Facebook profile frames that allow users to share their support for getting vaccinated with their family and friends. The effort follows a similar launch in the U.K. through a partnership with the National Health Service (NHS), which has already resulted in a quarter of Facebook users in the U.K. having seen a Facebook friend with the profile frame.
At launch, users in the U.S. can pick between frames which include banners that say either “Let’s Get Vaccinated” or “I Got My COVID-19 Vaccine” in English or Spanish. The banner will appear overlaid on the edge of their profile picture next to a blue bubble that reads “We Can Do This.”
Although there were already a variety of vaccine-promoting profile frames to choose from on Facebook, these were all third-party efforts until now. The new frames were created, in part, by Facebook, which will allow the company to better track their usage over time.
In the weeks ahead, Facebook says it will show people a summary in their News Feed of all the friends, family members and people they follow who are using the new COVID-19 vaccine profile frames. For that reason, adopting the first-party frames will be important if you want to be a part of the list that’s shown to others.
Facebook notes that it’s launching the frames because research shows how social norms can have a major impact on people’s attitude and behavior when it comes to their health — a notable assertion, given that the company wants to otherwise downplay the power its network has when it comes to the spread of disinformation or anti-vax sentiments.
For this effort, Facebook believes, and the research supports, that when people see others they know and trust getting the vaccine, they’ll be encouraged to do the same. This can be particularly effective when it comes to encouraging those who were otherwise unsure about getting the vaccine.
Leveraging social media to encourage vaccinations has been part of the CDC’s toolkit as well, which is why you likely saw several photos from healthcare workers and essential workers sharing their vaccination photos and talking about their experiences. The CDC had also provided sets of sample social media graphics and messages that could be used by organizations that wanted to promote vaccinations across Facebook, Twitter and LinkedIn.
The new profile frames are rolling out starting today to Facebook users in the U.S.
The Complete Checklist for Managing Campaigns on Facebook That Will Solve More Than 90% of Your Problems
Using this checklist will solve over 90% of the common problems in Facebook campaigns and optimize them for success.
Facebook develops a new way to interact with AR, Uber’s facial recognition policy faces scrutiny and SpaceX’s Starship rocket booster hits a major milestone. This is your Daily Crunch for March 19, 2021.
The big story: Facebook shows off wrist-based interface
This project comes out of Facebook Reality Labs and is supposed to present an alternative computer interface on your wrist, with electromyography sensors to interpret motor nerve signals.
In a blog post, Facebook said a wrist-based device “could reasonably fit into everyday life and social contexts,” while allowing the company to “bring the rich control capabilities of your hands into AR, enabling intuitive, powerful and satisfying interaction.”
Facebook identifies this as a research prototype, so don’t expect it to turn into a commercial product anytime soon. But it’s still suggestive, particularly given the company’s sometimes-surprising hardware strategy and rumors that it might be working on an Apple Watch competitor.
The tech giants
India asks court to block WhatsApp’s policy update, says new change violates laws — The Indian government alleged on Friday that WhatsApp’s planned privacy update violates local laws on several counts.
Uber under pressure over facial recognition checks for drivers — Uber’s use of facial recognition technology for a driver identity system is being challenged in the U.K.
Instagram and WhatsApp hit by outage — The outage began around 1:40 p.m. ET and lasted for more than half an hour.
Startups, funding and venture capital
SpaceX nears final assembly of its first massive testing rocket booster for Starship — SpaceX has completed what’s known as the “stacking” of its first Super Heavy prototype.
Brazilian startup Tractian gets the Y Combinator seal of approval for its equipment monitoring tech — Throughout their lives, the founders had heard their parents complain about the sorry state of maintenance and heavy equipment in their factories.
Superpedestrian positions itself as the go-to partner for cities with new e-scooter safety upgrades — Superpedestrian is considered an up-and-coming player in the micromobility world because of how it handles safety issues.
Advice and analysis from Extra Crunch
It’s time to abandon business intelligence tools — Organizations spend ungodly amounts of money on business intelligence tools, but adoption rates are still below 30%.
The lightning-fast Series A (that was 3 years in the making) — Sounding Board’s Christine Tao discusses raising her Series A on the How I Raised It podcast.
Survey: Share feedback on Extra Crunch — Tell us what you think about Extra Crunch!
(Extra Crunch is our membership program, which helps founders and startup teams get ahead. You can sign up here.)
Cloud infrastructure spending passed on-prem data centers in 2020 — That’s according to new research from Synergy Research Group.
Five trends in the boardrooms of high-growth private companies — Just as countless aspects of corporate life have been reshaped over the course of the last year, boards of directors are undergoing significant and lasting transformation.
Attend Disrupt 2021 for less than $100 — If three jam-packed days of TechCrunch Disrupt 2021 wasn’t enough to get your startup motor running, listen up.
The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.