- Given that consumers run billions of searches every day, with Google estimated to process 40,000 per second, it’s clear marketers need a smart strategy to cut through the competition.
- The question is: Will they drive the highest traffic and performance with SEO or PPC?
- Head of Paid Media at Tug shares insight on how perfectly balancing these two facets can lead to success in the new normal.
Consumer activity online is at an all-time high. So, it’s no surprise many marketers are aiming to make the most of it by hooking their attention early, at the point of search. But deciding how best to do so isn’t necessarily easy.
Given that consumers run billions of searches every day — with Google estimated to process 40,000 per second — it’s clear marketers need a smart strategy to cut through the competition. The question is: will they drive the highest traffic and performance with search engine optimization (SEO) or pay per click (PPC)?
Both have their own advantages and drawbacks. PPC is a quick win, enabling businesses to rapidly reach consumers and boost visibility. But its lead generation power only lasts while the money flows and, depending on campaign scale and scope, those costs can run high. Meanwhile, SEO delivers more lasting rewards and higher click-through rates (CTRs), often for less investment. Yet marketers might have a long wait before organic searches pay off, and may still fall behind dominant digital marketplaces for certain keywords.
Ultimately, the smartest route lies neither one way nor the other, but in a combination of both. Blending PPC and SEO not only generates stronger results but also balances out their respective shortcomings, offering marketers the best chance of success in the new ever-changing normal.
Utilizing a combination of paid and organic search tactics isn’t new – but it has never been clear how marketers can best do this, or how to visualize the data for optimization. Leveraging PPC and SEO in conjunction with one another can be challenging, but creating the perfect synergy is possible if marketers focus on the following three factors:
Unify search operations
With consumers spending a quarter of their waking day online, marketers have plenty of chances to spark their interest through search. To outmanoeuvre rivals and capture eyeballs first, brands must make fast yet informed decisions about which approach will produce the ideal outcome.
Achieving this requires holistic insight which, in turn, calls for greater unity. Because PPC and SEO are generally viewed as separate entities, teams often operate in silos, but this isolates valuable knowledge around consumer behaviour and the tactics that generate the biggest rewards. Simple as it sounds, removing divisions and encouraging teams to share their insight can significantly improve campaign execution and drive more efficient costs per acquisition (CPAs).
For example, information from the PPC teams on the best performing keywords and ad copy will help SEO teams to optimize meta descriptions and website content.
Sharing information on what keywords campaigns are covering will also prevent the doubling up of efforts – for example, as organic keyword positions improve, there might be an opportunity to pull back PPC activity and reallocate budget to other keywords to increase the overall coverage. Similarly, updates from the SEO team on keywords that are particularly competitive to rank in top positions are an opportunity for PPC to drive incremental conversions. And, on a more fundamental level, by sharing any new or emerging search terms with each other, both SEO and PPC teams can ensure they are up-to-date and reacting as quickly as possible to opportunities.
Select tech that drives collaboration
The next step is integrated technology. Implementing tools that collate and merge data from multiple sources — including PPC and SEO campaigns — will make collaboration easier. That’s not to mention generating a complete overview of collective search operations, performance, and opportunities for businesses.
A holistic and unified dashboard, for example, can provide visibility of combined search performance against KPIs and competitor activity. This enables PPC and SEO teams to identify where there are opportunities and how strategies can be adjusted to leverage them, without duplicating each other’s efforts. Marketers can understand where organic rankings are high, and competitor activity low, and vice versa, which means they know when to reduce PPC activity, as well as opportunities where it can drive incremental conversions over and above what SEO can deliver.
All of this, however, depends on accuracy and usability. Information needs to be reliable and actionable, which means simply joining up the data dots isn’t enough: in addition to robust cleansing, processing and storage, tools must offer accessible visualization.
Although frequently overlooked, clearly-presented data plays a huge part in enhancing everyday activity. Providing a streamlined picture of keywords and performance data is vital, but to ensure teams can pinpoint prime SERPs, accelerate traffic, and increase conversions, businesses also need tools that allow their teams to quickly find and activate key insights.
Don’t forget human checks
Dialing up tech use, however, does come with a word of warning: no matter how smart platforms may be, they can’t entirely replace human experience and expertise. Certainly, sophisticated tools bring a range of benefits that go far beyond translating data into a more cohesive and user-friendly format. The most advanced boast immediate alerts that tell PPC teams where their competitors are bidding — or not — and use artificial intelligence (AI) analysis to deliver a cross-market, sector, and classification perspective on SEO activity.
Human knowledge is still paramount to steering search campaigns in the right direction and picking up on the nuances that machines miss. For instance, problem-solving machines might take the quickest path to objective completion for certain pages or messages, but seasoned search professionals may see the potential for longer-term uses that deliver higher incremental value.
As a result, organizations must avoid the perils of over-reliance on their marketing tools. By persistently applying manual reviews and checking automated conclusions against human knowledge, they can tap the best of tech and people power.
Today’s marketing leaders are grappling with multiple uncertainties, but when it comes to search, the way forward is clear. PPC and SEO are complementary forces; producing deeper insights and higher returns together, as well as minimizing risk. By connecting the two and taking a considered approach to data-driven search strategy, businesses can ensure campaigns are strong enough to succeed in the new normal and take on whatever tomorrow brings.
Asher Gordon is Head of Paid Media at Tug. He leads a multi-disciplined media team who plan, buy, and deliver integrated media plans for a diverse set of clients. With over 10 years’ experience working across multiple markets and brands at PHD and Wavemaker, Asher works with clients to meet their marketing goals and drive their business forward.
The post Synergized search is key to success in the new normal appeared first on Search Engine Watch.
Adobe has had a developer program for years called Adobe.io, but today at the Adobe Developers Live virtual conference, the company announced some new tools with a fresh emphasis on helping developers build custom apps on the Adobe Experience Cloud.
Jason Woosley, VP of developer experience and commerce at Adobe, says that the pandemic has forced companies to build enhanced digital experiences much more quickly than they might have, and the new tools being announced today are at least partly related to helping speed up the development of better online experiences.
“Our focus is very specifically on making the experience-generation business something that’s very attractive to developers and very accessible to developers so we’re announcing a number of tools,” Woosley told TechCrunch.
The idea is to build a more complete framework over time to make it easier to build applications and connect to data sources that take advantage of the Experience Cloud tooling. For starters, Project Firefly is designed to help developers build applications more quickly by providing a higher level of automation than was previously available.
“Project Firefly creates an extensibility framework that reduces the boilerplate that a developer would need to get started working with the Experience Cloud, and extends that into the customizations that we know every implementation eventually needs to differentiate the storefront experience, the website experience or whatever customer touch point as these things become increasingly digital,” he said.
In order to make those new experiences open to all, the company is also announcing React Spectrum, an open source set of libraries and tools designed to help members of the Adobe developer community build more accessible applications and websites.
“It comes with all of the accessibility features that often get forgotten when you’re in a race to market, so it’s nice to make sure that you will be very inclusive with your design, making sure that you’re bringing on all aspects of your audiences,” Woosley said.
Finally, a big part of interacting with Experience Cloud is taking advantage of all of the data that’s available to help build those more customized interactions with customers that having that data enables. To that end, the company is announcing some new web and mobile software development kits (SDKs) designed to help make it simpler to link to Experience Cloud data sources as you build your applications.
Project Firefly is available in developer preview starting today. Several React Spectrum components and some data connection SDKs are also available today. The company intends to keep adding to these various pieces in the coming months.
Research papers come out far too rapidly for anyone to read them all, especially in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect the most relevant recent discoveries and papers — particularly in but not limited to artificial intelligence — and explain why they matter.
The topics in this week’s Deep Science column are a real grab bag that range from planetary science to whale tracking. There are also some interesting insights from tracking how social media is used and some work that attempts to shift computer vision systems closer to human perception (good luck with that).
ML model detects arthritis early
One of machine learning’s most reliable use cases is training a model on a target pattern, say a particular shape or radio signal, and setting it loose on a huge body of noisy data to find possible hits that humans might struggle to perceive. This has proven useful in the medical field, where early indications of serious conditions can be spotted with enough confidence to recommend further testing.
This arthritis detection model looks at X-rays, just as the doctors who do that kind of work do. But by the time osteoarthritis is visible to human perception, the damage is already done. A long-running project that tracked thousands of people for seven years made for a great training set, making the nearly imperceptible early signs of osteoarthritis visible to the AI model, which predicted the condition with 78% accuracy three years out.
The bad news is that knowing early doesn’t necessarily mean it can be avoided, as there’s no effective treatment. But that knowledge can be put to other uses — for example, much more effective testing of potential treatments. “Instead of recruiting 10,000 people and following them for 10 years, we can just enroll 50 people who we know are going to be getting osteoarthritis … Then we can give them the experimental drug and see whether it stops the disease from developing,” said co-author Kenneth Urish. The study appeared in PNAS.
Using acoustic monitoring to preemptively save the whales
It’s amazing to think that ships still collide with and kill large whales on a regular basis, but it’s true. Voluntary speed reductions haven’t been much help, but a smart, multisource system called Whale Safe is being put in play in the Santa Barbara channel that could hopefully give everyone a better idea of where the creatures are in real-time.
The system uses underwater acoustic monitoring, near-real-time forecasting of likely feeding areas, actual sightings and a dash of machine learning (to identify whale calls quickly) to produce a prediction for whale presence along a given course. Large container ships can then make small adjustments well-ahead of time instead of trying to avoid a pod at the last minute.
“Predictive models like this give us a clue for what lies ahead, much like a daily weather forecast,” said Briana Abrahms, who led the effort from the University of Washington. “We’re harnessing the best and most current data to understand what habitats whales use in the ocean, and therefore where whales are most likely to be as their habitats shift on a daily basis.”
Incidentally, Salesforce founder Marc Benioff and his wife Lynne helped establish the UC Santa Barbara center that made this possible.
Calculating Featured Snippet Answer Scores
An update this week to a patent tells us how Google may score featured snippet answers.
When a search engine ranks search results in response to a query, it may use a combination of query dependent and query independent ranking signals to determine those rankings.
A query dependent signal depends on a term in a query, and how relevant a search result may be for that query term. A query independent signal depends on something other than the terms in a query, such as the quality and quantity of links pointing to a result.
Answers to questions in queries may be ranked based on a combination of query dependent and query independent signals, which could determine a featured snippet answer score. An updated patent about textual answer passages tells us how those may be combined to generate featured snippet answer scores to choose from answers to questions that appear in queries.
A year and a half ago, I wrote about answers to featured snippets in the post Does Google Use Schema to Write Answer Passages for Featured Snippets?. The patent that the post was about was Candidate answer passages, which was originally filed on August 12, 2015, and was granted as a continuation patent on January 15, 2019.
That patent was a continuation patent to an original one about answer passages that updated it by telling us that Google would look for textual answers to questions that had structured data near them that included related facts. This could have been something like a data table or possibly even schema markup. This meant that Google could provide a text-based answer to a question and include many related facts for that answer.
Another continuation of the first version of the patent was granted just this week. It provides more information and a different approach to ranking answers for featured snippets, and it is worth comparing the claims in the two versions of the patent to see how they differ.
The new version of the featured snippet answer scores patent is at:
Scoring candidate answer passages
Inventors: Steven D. Baker, Srinivasan Venkatachary, Robert Andrew Brennan, Per Bjornsson, Yi Liu, Hadar Shemtov, Massimiliano Ciaramita, and Ioannis Tsochantaridis
Assignee: Google LLC
US Patent: 10,783,156
Granted: September 22, 2020
Filed: February 22, 2018
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for scoring candidate answer passages. In one aspect, a method includes receiving a query determined to be a question query that seeks an answer response and data identifying resources determined to be responsive to the query; for a subset of the resources: receiving candidate answer passages; determining, for each candidate answer passage, a query term match score that is a measure of similarity of the query terms to the candidate answer passage; determining, for each candidate answer passage, an answer term match score that is a measure of similarity of answer terms to the candidate answer passage; determining, for each candidate answer passage, a query dependent score based on the query term match score and the answer term match score; and generating an answer score that is a based on the query dependent score.
Candidate Answer Passages Claims Updated
The changes to the patent require more analysis of potential answers, based on both query dependent and query independent scores for potential answers to questions. The patent description provides details about both score types. The first claim of the original patent covers query dependent scores for answers, but not query independent scores as the newest version does. The rest of the claims provide more detail about both, but the newer version makes the query dependent and query independent scores jointly important.
The first claim from the 2015 version of the Scoring Answer Passages patent tells us:
1. A method performed by data processing apparatus, the method comprising: receiving a query determined to be a question query that seeks an answer response and data identifying resources determined to be responsive to the query and ordered according to a ranking, the query having query terms; for each resource in a top-ranked subset of the resources: receiving candidate answer passages, each candidate answer passage selected from passage units from content of the resource and being eligible to be provided as an answer passage with search results that identify the resources determined to be responsive to the query and being separate and distinct from the search results; determining, for each candidate answer passage, a query term match score that is a measure of similarity of the query terms to the candidate answer passage; determining, for each candidate answer passage, an answer term match score that is a measure of similarity of answer terms to the candidate answer passage; determining, for each candidate answer passage, a query dependent score based on the query term match score and the answer term match score; and generating an answer score that is a measure of answer quality for the answer response for the candidate answer passage based on the query dependent score.
The remainder of the claims tell us about both query dependent and query independent scores for answers, but the claims in the newer version of the patent place equal importance on the two. That convinced me to revisit this patent in a post and describe how Google may calculate answer scores based on query dependent and query independent scores.
The first claims in the new patent tell us:
1. A method performed by data processing apparatus, the method comprising: receiving a query determined to be a question query that seeks an answer response and data identifying resources determined to be responsive to the query and ordered according to a ranking, the query having query terms; for each resource in a top-ranked subset of the resources: receiving candidate answer passages, each candidate answer passage selected from passage units from content of the resource and being eligible to be provided as an answer passage with search results that identify the resources determined to be responsive to the query and being separate and distinct from the search results; determining, for each candidate answer passage, a query dependent score that is proportional to a number of instances of matches of query terms to terms of the candidate answer passage; determining, for each candidate answer passage, a query independent score for the candidate answer passage, wherein the query independent score is independent of the query and query dependent score and based on features of the candidate answer passage; and generating an answer score that is a measure of answer quality for the answer response for the candidate answer passage based on the query dependent score and the query independent score.
As it says in this new claim, the answer score has gone from being “a measure of answer quality for the answer response for the candidate answer passage based on the query dependent score” (from the first patent) to “a measure of answer quality for the answer response for the candidate answer passage based on the query dependent score and the query independent score” (from this newer version of the patent.)
This drawing appears in both versions of the patent, and it shows the query dependent and query independent scores both playing an important role in calculating featured snippet answer scores:
Query Dependent and Query Independent Scores for Featured Snippet Answer Scores
Both versions of the patent tell us how a query dependent score and a query independent score for an answer might be calculated. The first version only told us in its claims that an answer score used the query dependent score; this newer version tells us that the query dependent and query independent scores are combined to calculate an answer score (to decide which answer is the best choice for a query).
Before the patent discusses how query dependent and query independent signals might be used to create an answer score, it tells us this about the answer score:
The answer passage scorer receives candidate answer passages from the answer passage generator and scores each passage by combining scoring signals that predict how likely the passage is to answer the question.
In some implementations, the answer passage scorer includes a query dependent scorer and a query independent scorer that respectively generate a query dependent score and a query independent score. In some implementations, the query dependent scorer generates the query dependent score based on an answer term match score and a query term match score.
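The combination described above can be sketched as toy code. This is not Google’s implementation; the function, the sample scores, and the 0.6/0.4 weighting are assumptions for illustration only:

```python
# Illustrative sketch (not the patent's actual method): combining a
# query dependent score and a query independent score into one answer
# score. The weights below are assumed; the patent does not disclose them.

def answer_score(query_dependent: float, query_independent: float) -> float:
    """Weighted sum of the two signals; one plausible combination."""
    return 0.6 * query_dependent + 0.4 * query_independent

# The candidate passage with the highest answer score would be selected
# as the featured snippet answer.
candidates = [("passage A", 0.8, 0.5), ("passage B", 0.6, 0.9)]
best = max(candidates, key=lambda p: answer_score(p[1], p[2]))
```

Here “passage B” wins despite a lower query dependent score, because its query independent signals lift its combined score.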
Query Dependent Scoring for Featured Snippet Answer Scores
Query Dependent Scoring of answer passages is based on answer term features.
An answer term match score is a measure of similarity of answer terms to terms in a candidate answer passage.
Answer-seeking queries do not describe what a searcher is looking for, since the answer is unknown to the searcher at the time of the search.
The query dependent scorer begins by finding a set of likely answer terms and compares that set to a candidate answer passage to generate an answer term match score. The set of likely answer terms is typically taken from the top N ranked results returned for a query.
The process creates a list of terms from terms that are included in the top-ranked subset of results for a query. The patent tells us that each result is parsed and each term is included in a term vector. Stop words may be omitted from the term vector.
For each term in the list, a term weight may be generated. The term weight may be based on the number of results in the top-ranked subset in which the term occurs, multiplied by an inverse document frequency (IDF) value for the term. The IDF value may be derived from a large corpus of documents and provided to the query dependent scorer, or it may be derived from the top N documents in the returned results. The patent tells us that other appropriate term weighting techniques can also be used.
The scoring process determines, for each term, the number of times the term occurs in the candidate answer passage. So, if the term “apogee” occurs two times in a candidate answer passage, the term value for “apogee” for that passage is 2. If the same term occurs three times in a different candidate answer passage, the term value there is 3.
The scoring process then multiplies each term’s weight by the number of times it occurs in the answer passage. So, assume the term weight for “apogee” is 0.04. For the first candidate answer passage, the value based on “apogee” is 0.08 (0.04 × 2); for the second candidate answer passage, the value based on “apogee” is 0.12 (0.04 × 3).
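That arithmetic can be sketched in a few lines. The function name and the hard-coded weight table below are illustrative stand-ins, not anything taken from the patent:

```python
# Sketch of the answer term match arithmetic described above: for each
# weighted term, multiply its weight by its occurrence count in the
# passage, then sum. The weight table here is hard-coded for illustration;
# the real system would derive weights from the top-ranked results and IDF.

def answer_term_match_score(passage_terms: list, term_weights: dict) -> float:
    score = 0.0
    for term, weight in term_weights.items():
        score += weight * passage_terms.count(term)
    return score

weights = {"apogee": 0.04}
passage_1 = "the apogee is the farthest point the apogee varies".split()
passage_2 = "apogee apogee apogee".split()

print(answer_term_match_score(passage_1, weights))  # 0.08 (0.04 x 2)
print(answer_term_match_score(passage_2, weights))  # 0.12 (0.04 x 3)
```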
Other answer term features can also be used to determine an answer term score. For example, the query dependent scorer may determine an entity type for an answer response to the question query. The entity type may be determined by identifying terms that identify entities, such as persons, places, or things, and selecting the terms with the highest term scores. The entity type may also be identified from the query (e.g., for the query [who is the fastest man], the entity type for an answer is “man”). For each candidate answer passage, the query dependent scorer then identifies entities described in the passage. If those entities do not include a match to the identified entity type, the answer term match score for the candidate answer passage is reduced.
Assume the following candidate passage answer is provided for scoring in response to the query [who is the fastest man]: Olympic sprinters have often set world records for sprinting events during the Olympics. The most popular sprinting event is the 100-meter dash.
The query dependent scorer will identify several entities (Olympics, sprinters, etc.), but none of them are of the type “man”; the term “sprinter” is gender-neutral. Accordingly, the answer term score will be reduced. The score may be binary, e.g., 1 for the presence of a term of the entity type and 0 for its absence; alternatively, it may be a measure of the likelihood that the correct term is in the candidate answer passage. Any appropriate scoring technique can be used to generate the score.
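The binary variant of that entity-type check can be sketched as follows. The function and its inputs are assumptions for illustration; the patent leaves the exact mechanism open:

```python
# Sketch of the binary entity-type check described above (assumed logic,
# not the patent's implementation): the factor is 1 when the passage
# contains an entity of the expected type, and 0 otherwise, reducing the
# answer term match score for passages that miss the expected type.

def entity_type_factor(expected_type: str, passage_entity_types: set) -> int:
    return 1 if expected_type in passage_entity_types else 0

# For [who is the fastest man], the expected entity type is "man".
# The sample passage above mentions only event/sprinter entities.
factor = entity_type_factor("man", {"event", "sprinter"})
print(factor)  # 0 -> the answer term match score would be reduced
```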
Query Independent Scoring for Featured Snippet Answer Scores
Scoring answer passages according to query independent features.
Candidate answer passages may be generated from the top N ranked resources identified for a search in response to a query. N may be the same number as the number of search results returned on the first page of search results.
The scoring process can use a passage unit position score, based on the position of the result that a candidate answer passage comes from: the higher the result’s position, the higher the score.
The scoring process may use a language model score. The language model score generates a score based on candidate answer passages conforming to a language model.
One type of language model is based on sentence and grammar structures. This could mean that candidate answer passages with partial sentences may have lower scores than candidate answer passages with complete sentences. The patent also tells us that if structured content is included in the candidate answer passage, the structured content is not subject to language model scoring. For instance, a row from a table may have a very low language model score but may be very informative.
Another language model that may be used considers whether text from a candidate answer passage appears similar to answer text in general.
A query independent scorer accesses a language model of historical answer passages, where the historical answer passages are those that have been served for all queries. Served answer passages generally have a similar n-gram structure, since answer passages tend to include explanatory and declarative statements. A query independent scorer could use a tri-gram model to compare the tri-grams of the candidate answer passage to those of the historical answer passages. A higher-quality candidate answer passage will typically have more tri-gram matches to the historical answer passages than a lower-quality one.
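A crude version of that tri-gram comparison might look like the sketch below. The overlap-fraction scoring is an assumption, since the patent gives no formula, and the tiny “historical” set stands in for a model built from previously served answer passages:

```python
# Sketch of the tri-gram comparison described above. Scoring a candidate
# by the fraction of its tri-grams seen in historical answer passages is
# an assumed formula; the patent only says more matches indicate quality.

def trigrams(text: str) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def language_model_score(candidate: str, historical_trigrams: set) -> float:
    """Fraction of the candidate's tri-grams found in historical answers."""
    cand = trigrams(candidate)
    if not cand:
        return 0.0
    return len(cand & historical_trigrams) / len(cand)

# A stand-in "historical" corpus of one served answer passage.
historical = trigrams("the moon is approximately 238900 miles from the earth")
good = language_model_score("the moon is far away", historical)
# "the moon is" matches a historical tri-gram, so `good` is above zero.
```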
Another step involves a section boundary score. A candidate answer passage could be penalized if it includes text that passes formatting boundaries, such as paragraphs and section breaks, for example.
The scoring process determines an interrogative score. The query independent scorer searches the candidate answer passage for interrogative terms. A potential answer passage that includes a question or question term, e.g., “How far away is the moon from the Earth?”, is generally not as helpful to a searcher looking for an answer as a candidate answer passage that includes only declarative statements, e.g., “The moon is approximately 238,900 miles from the Earth.”
The scoring process also determines discourse boundary term position scores. A discourse boundary term is one that introduces a statement or idea contrary to, or a modification of, one that has just been made. For example, “conversely,” “however,” “on the other hand,” and so on.
A candidate answer passage beginning with such a term receives a relatively low discourse boundary term position score, which lowers the answer score.
A candidate answer passage that includes but does not begin with such a term receives a higher discourse boundary term position score than it would if it began with the term.
A candidate answer passage that does not include such a term receives a high discourse boundary term position score.
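The three cases above can be sketched as a tiered score. The numeric tier values below are assumptions; the patent only specifies their ordering:

```python
# Sketch of the discourse boundary term position score. The tier values
# (0.2 / 0.6 / 1.0) are assumed; the patent only orders the three cases:
# begins with a boundary term < contains one < contains none.

DISCOURSE_TERMS = {"conversely", "however", "on the other hand"}

def discourse_boundary_score(passage: str) -> float:
    text = passage.lower()
    if any(text.startswith(t) for t in DISCOURSE_TERMS):
        return 0.2   # begins with a boundary term: lowest score
    if any(t in text for t in DISCOURSE_TERMS):
        return 0.6   # contains but does not begin with one
    return 1.0       # no boundary term: highest score

print(discourse_boundary_score("However, the moon is far."))      # 0.2
print(discourse_boundary_score("The moon is far. However, ..."))  # 0.6
print(discourse_boundary_score("The moon is far away."))          # 1.0
```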
The scoring process determines result scores for the results from which the candidate answer passage was created. These could include a ranking score, a reputation score, and a site quality score. The higher these scores are, the higher the answer score will be.
A ranking score is based on the ranking score of the result from which the candidate answer passage was created. It can be the search score of the result for the query and will be applied to all candidate answer passages from that result.
A reputation score indicates the trustworthiness of the result and/or the likelihood that its subject matter serves the query well.
A site quality score indicates a measure of the quality of a web site that hosts the result from which the candidate answer passage was created.
Component query independent scores described above may be combined in several ways to determine the query independent score. They could be summed; multiplied together; or combined in other ways.
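As one illustration of that combination step, the sketch below averages named component signals. The signal names, their sample values, and the simple-average formula are all assumptions; the patent allows sums, products, or other combinations:

```python
# Sketch of combining the component query independent scores described
# above. A plain average is one assumed option; the patent says the
# components may be summed, multiplied, or combined in other ways.

def query_independent_score(components: dict) -> float:
    """components maps a signal name to a score in [0, 1]."""
    return sum(components.values()) / len(components)

components = {
    "passage_position": 0.9,
    "language_model": 0.7,
    "section_boundary": 1.0,
    "interrogative": 1.0,
    "discourse_boundary": 0.6,
    "result_scores": 0.8,
}
score = query_independent_score(components)  # average of the six signals
```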
Copyright © 2020 SEO by the Sea ⚓.
- STEPPS model singles out social currency, triggers, emotional value, public popularity, practical value, and background stories as common points between viral content.
- People’s tendency to mimic popular behavior and be a part of a group activity results in viral trends.
- The internet likes content that is raw and genuine.
- Evelyn Johnson details the elements of viral content and how brands can achieve it.
George Miller, famously known as Joji, is one of the fastest-rising musicians. He’s the first Asian-born artist to top R&B and hip-hop charts, a feat he achieved with his first studio album. But Joji is no stranger to fame. Before he was a teenage heartthrob, Joji was garnering millions of views on YouTube as FilthyFrank, an online persona that was edgy, outrageous, and oftentimes outright filthy, which helped his videos qualify as viral content.
To top it all off, FilthyFrank was responsible for creating one of the biggest viral trends in the history of the Internet.
The Harlem Shake
Back when TikTok wasn’t a thing and Twitter was just a few years old, viral trends were somewhat rare. It was unusual for people to replicate a popular video and take part in activities just because others were doing it on the internet.
The Harlem Shake changed all this. It’s hard to describe this trend, so I will let Wikipedia do its job.
“The videos usually last about 30 seconds and feature an excerpt of the 2012 song “Harlem Shake” by American EDM producer Baauer. Baauer’s song starts with a 15-second intro, a bass drop, then 15 seconds with the bass, and a lion roar at the end of the first 30 seconds. Usually, a video begins with one person (often helmeted or masked) dancing to the song alone for 15 seconds, surrounded by other people not paying attention or seemingly unaware of the dancing individual. When the bass drops, the video cuts to the entire group dancing for the rest of the video”.
To explain it simply, a small 30-second clip of Joji dancing in his Pink Guy persona along with his friends in outlandish costumes caught fire, and soon, everyone wanted in on the act. Even the Norwegian Army joined in the fun.
Now, does this make any sense? Of course not.
The internet can be pretty senseless at times (or maybe most of the time). It’s hard to determine what exactly will capture the imagination of the masses.
But this hasn’t stopped social scientists from analyzing virality and what causes people to participate in a trend.
What exactly makes something go viral?
Let’s say you are a marketing manager at an ecommerce store trying to create a viral promo code. You may look at popular trends and get bedazzled by their randomness. Should you gather a group of individuals in random costumes and make them dance? Maybe it will turn out to be the next Harlem Shake?
Hopefully, it won’t come to that, as many academics and marketing professionals have tried to make sense of internet virality. Among them, Jonah Berger’s STEPPS model, Robert Cialdini’s principle of social proof, and a study by Cambridge University give important insight into the psychology behind viral content.
1. Jonah Berger’s STEPPS Model
When it comes to understanding virality, Wharton marketing professor Jonah Berger is a familiar name. In his book Contagious: Why Things Catch On, Berger attributes six main reasons for the success of videos that go viral. He defines it as the STEPPS model:
- Social Currency: People share content that makes them seem smart and cool. While most of us would like to believe otherwise, we do care what others think of us.
- Triggers: Relates to the idea of “top of mind, tip of tongue”. People talk about things that are on their minds.
- Emotion: When people care, they share. Individuals are likely to share content they found emotionally arousing.
- Public: Anything that’s already popular is more likely to be shared further. People imitate other people.
- Practical Value: Useful stuff is also widely shared. Informational content has great value and hence “How to” videos often go viral.
- Stories: People are always interested in compelling narratives and stories. Anything that connects with the audience has a chance of being shared.
It’s hard not to trace at least some of these attributes in content that goes viral.
Old Town Road
Lil Nas X’s Old Town Road became a hit because it had a certain social currency. It was a cool genre-bending song about country life that people loved sharing. Since the song was released around the time a game with similar themes, Red Dead Redemption, was extremely popular, it triggered a massive response.
People were sharing memes of Old Town Road long before it reached number one on the Billboard Hot 100. This means the song had the public aspect of the STEPPS model as well.
Admittedly, the song provided no real practical value. But it had an intriguing story: the song was removed from the country chart because critics believed it did not belong to the genre. This prompted country legend Billy Ray Cyrus to come out in the song’s support and collaborate with the singer.
2. Robert Cialdini’s principle of social proof
In his book Influence: The Psychology of Persuasion, Robert Cialdini introduces the concept of social proof. In his own words:
“The greater the number of people who find any idea correct, the more the idea will be correct…We will use the actions of others to decide on proper behavior for ourselves, especially when we view those others as similar to ourselves”.
It’s also known as herd mentality or groupthink, where we assume an action is appropriate because others are performing it.
Consider this: How many times have you retweeted a tweet that already had thousands of retweets?
“When a social media post reaches a certain level of popularity —it is likely to get shares from individuals who are more influenced by the number of likes than the content itself,”
says Julia Markle, the head of digital content at ClothingRIC.
“This generates a chain reaction that transforms a popular post into a viral post.”
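That chain-reaction idea maps onto a simple branching-process model: if each share exposes enough new viewers, and enough of them reshare, the expected number of shares grows every round. A toy sketch with invented numbers (the viewer counts and share probabilities are purely illustrative):

```python
def simulate_cascade(seed_shares, viewers_per_share, share_prob, rounds):
    """Expected-value cascade: each share exposes `viewers_per_share`
    new viewers, each of whom reshares with probability `share_prob`.
    The reproduction number is R = viewers_per_share * share_prob."""
    shares = seed_shares
    total = seed_shares
    for _ in range(rounds):
        shares = shares * viewers_per_share * share_prob  # expected new shares this round
        total += shares
    return total

# R = 50 * 0.03 = 1.5 > 1: every round produces more shares than the last.
viral = simulate_cascade(100, 50, 0.03, 10)

# R = 50 * 0.01 = 0.5 < 1: the cascade fizzles out.
fizzle = simulate_cascade(100, 50, 0.01, 10)

print(round(viral), round(fizzle))
```

Social proof effectively raises `share_prob` as the visible like count grows, which is how a merely popular post can cross the R > 1 threshold and become a viral one.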
3. Viral altruism
Virality on the internet is most often random and spontaneous. But sometimes it is by design. In a paper published by the University of Cambridge, the famous ALS Ice Bucket Challenge was dubbed “viral altruism”.
Social psychologist Dr. Sander van der Linden described some key psychological factors that cause such trends to catch fire. Among these are the power of social norms (in particular, the attraction of joining a social consensus and the aspiration to conform to prosocial behavior), a clear moral incentive to act, and the need for a “warm glow”: the emotional benefit gained from feeling compassionate.
In simple words, people act:
- To participate in a popular activity
- To gain the emotional reward of giving to others
- When they believe it’s morally the right thing to do
ALS Ice Bucket Challenge
The ALS Ice Bucket Challenge appealed to people’s good side, and this made it wildly successful. Its virality resulted in $220 million being raised to battle the disease. However, not all challenges that go viral have a charitable aspect.
For instance, the “Don’t Rush Challenge” wasn’t for a greater cause but became popular as it allowed people to come together and build a connection by mimicking a popular behavior.
What makes the Ice Bucket Challenge special is that it demonstrated how internet virality can be harnessed and used to inspire action. Of course, this is easier said than done for brands.
How can brands create viral content?
The internet is filled with listicles that guide brands in crafting viral content. But well-designed infographics, solid backlinks, and hired influencers don’t always drive a flood of visitors to your website.
It’s important that, along with covering the basics, you take one important factor into consideration: authenticity. From the Hit or Miss video to Old Town Road and the ALS Ice Bucket Challenge, all these popular trends were raw and genuine.
Considering this, brands should create content that fosters human connection. Almost all individuals who’ve studied virality believe “being involved in something big” motivates people to share content and participate in challenges. So whether you hit or miss, there’s no harm in spreading your word through genuine content that’s designed to go viral.
But don’t make the mistake of creating something that’s a blatant attempt to cash in on internet culture. Millennials and Gen Z in particular will be hostile towards any such effort.
Evelyn Johnson is a full-time cat lady and a part-time blogger. She writes about digital marketing, technology, social issues, and pretty much anything that’s in the now. Evelyn can be found on Twitter @EvelynJohns0n.
The post Creating viral content: Is it possible to harness viral trends? appeared first on Search Engine Watch.
The highest valued company in Bessemer’s annual cloud report has defied convention by staying private
This year’s Bessemer Venture Partners’ annual Cloud 100 Benchmark report was published recently and my colleague Alex Wilhelm looked at some broad trends in the report, but digging into the data, I decided to concentrate on the Top 10 companies by valuation. I found that the top company has defied convention for a couple of reasons.
Bessemer looks at private companies. Once they go public, they lose interest, and that’s why certain startups go in and out of this list each year. As an example, Dropbox was the most highly valued company by far, with a valuation in the $10 billion range for 2016 and 2017, the earliest data in the report. It went public in 2018 and therefore disappeared.
While that $10 billion benchmark remains a fairly good measure of a solidly valued cloud company, one company in particular blew away the field in terms of valuation, an outlier so huge, its value dwarfs even the mighty Snowflake, which was valued at over $12 billion before it went public earlier this month.
That company is Stripe, which has an other-worldly valuation of $36 billion. Stripe began its ascent to the top of the charts in 2016 and 2017, when it sat behind Dropbox with a $6 billion valuation in 2016 and around $8 billion in 2017. By the time Dropbox left the chart in 2018, Stripe would have likely blown past it when its valuation soared to $20 billion. It zipped up to around $23 billion last year before taking another enormous leap to $36 billion this year.
Stripe remains an outlier not only for its enormous valuation, but also the fact that it hasn’t gone public yet. As TechCrunch’s Ingrid Lunden pointed out in an article earlier this year, the company has remained quiet about its intentions, although there has been some speculation lately that an IPO could be coming.
What Stripe has done to earn that crazy valuation is to be the cloud payment API of choice for some of the largest companies on the internet. Consider that Stripe’s customers include Amazon, Salesforce, Google and Shopify and it’s not hard to see why this company is valued as highly as it is.
Stripe came up with the idea of making it simple to incorporate a payments mechanism into your app or website, something that’s extremely time-consuming to do. Instead of building their own, developers tapped into Stripe’s ready-made variety and Stripe gets a little money every time someone bangs on the payment gateway.
When you’re talking about some of the biggest companies in the world being involved, and many others large and small, all of those payments running through Stripe’s systems add up to a hefty amount of revenue, and that revenue has led to this amazing valuation.
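A back-of-the-envelope sketch of how per-transaction fees add up into that revenue. The 2.9% + 30¢ figure is Stripe’s widely cited standard U.S. card rate; the merchant and volume numbers are invented:

```python
def gateway_revenue(payments, pct_fee=0.029, fixed_fee=0.30):
    """Fees a payment gateway keeps from a list of payment amounts (USD)."""
    return sum(amount * pct_fee + fixed_fee for amount in payments)

# A hypothetical merchant processing 1,000 payments of $50 each:
revenue = gateway_revenue([50.0] * 1000)
print(f"${revenue:,.2f}")  # $1,750.00
```

Multiply that by millions of merchants, including some of the largest companies on the internet, and the per-transaction slivers become a very large business.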
One other company you might want to pay attention to here is UiPath, the robotic process automation company, which was sitting just behind Snowflake with a valuation of over $10 billion. While it’s unclear if RPA, the technology that helps automate legacy workflows, will have the lasting power of a payments API, it certainly has come on strong over the last couple of years.
Most of the companies in this report appear for a couple of years as they become unicorns, watch their values soar and eventually go public. Stripe up to this point has chosen not to do that, making it a highly unusual company.
Last month, Facebook introduced support for paid online events — and because many of the businesses offering those events have struggled during the coronavirus pandemic, the company also said it would not collect fees for the next year. At the same time, it complained that Apple had “dismissed” its requests to waive the App Store’s customary 30% fee on in-app purchases.
Today, Facebook is announcing a reversal on Apple’s part: Online event fees will be processed through Facebook Pay, without Apple collecting its 30% cut, meaning businesses will receive all of the earnings from their online events, minus taxes. This arrangement will last until December 31 and will not apply to gaming creators.
The news comes after Facebook publicly pressured Apple to change its stance. It even submitted an iOS app update stating that “Apple takes 30% of this purchase” in the events payments flow. (Facebook said Apple rejected the update for including information that’s “irrelevant” to users.)
And while the two companies appear to have come to an agreement, today’s statements from Facebook are still a bit barbed.
“This is a difficult time for small businesses and creators, which is why we are not collecting any fees from paid online events while communities remain closed for the pandemic,” said Facebook spokesperson Joe Osborne. “Apple has agreed to provide a brief, three-month respite after which struggling businesses will have to, yet again, pay Apple the full 30% App Store tax.”
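The arithmetic behind the dispute is straightforward. A small sketch comparing what a business keeps from a hypothetical $20 event ticket with and without Apple’s 30% cut (taxes ignored; the ticket price is an assumption, not from the article):

```python
def creator_payout(ticket_price, platform_cut=0.0, store_cut=0.0):
    """What the business receives after platform and app-store cuts."""
    return ticket_price * (1 - platform_cut) * (1 - store_cut)

# Facebook waives its fee and Apple's cut is bypassed via Facebook Pay:
full = creator_payout(20.00)
# The same ticket with Apple's customary 30% in-app purchase cut:
with_apple = creator_payout(20.00, store_cut=0.30)
print(f"{full:.2f} vs {with_apple:.2f}")  # 20.00 vs 14.00
```

That 30% difference on every ticket is what Facebook is calling the “App Store tax” and what the temporary arrangement lets businesses keep through December 31.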
Similarly, in discussing the exception for gaming creators, Facebook Gaming Vice President Vivek Sharma said, “We unfortunately had to make this concession to get the temporary reprieve for other businesses.”
When asked about the change, Apple provided the following statement: “The App Store provides a great business opportunity for all developers, who use it to reach half a billion visitors each week across 175 countries. To ensure every developer can create and grow a successful business, Apple maintains a clear, consistent set of guidelines that apply equally to everyone.”
More specifically, Apple said it’s giving Facebook until the end of the year to implement in-app payments for these events and bring them into compliance with App Store rules.
This also comes as Fortnite-maker Epic Games is waging a legal battle and publicity campaign against Apple’s App Store fees, with Fortnite removed from the iOS App Store. Epic is also part of a just-announced group of publishers called the Coalition for App Fairness, which is pushing for app store changes or regulation.
- Once VMware is free from Dell, who might fancy buying it?
- Facebook faces ‘mass action’ lawsuit in Europe over 2019 breach
- Chinese hardware makers turn to crowdfunding as they look to go global
- Core Web Vitals & Preparing for Google’s Page Experience Update
- Conversion modeling through Consent Mode in Google Ads