American Online Phonebook

Monthly Archives: October 2020

Five fears of channel owners: What spooks you about creating your own CTV app?

October 31, 2020

30-second summary:

  • Recent data from Roku shows that 85% of Americans are now streamers. Getting them excited about a new CTV app is no piece of cake, but it is not impossible either.
  • In the dark, dark woods of AdTech, Connected TV (CTV) apps are a dime a dozen. This may sound spooky enough for a proper Halloween horror story.
  • Many potentially brilliant content creators are getting cold feet at the thought of launching their own CTV channels.
  • Alex Zakrevsky, CEO of Allroll, helps you overcome these fears.

In the dark, dark woods of AdTech, Connected TV (CTV) apps are a dime a dozen. This may sound spooky enough for a proper Halloween horror story. In reality, the impressive growth of the CTV market has intensified competition and given it a distinct “survival of the fittest” character. As a result, the fact that the number of connected TV devices in the US reached 400 million this year, per Leichtman Research Group, is no longer as appealing or comforting for channel owners as it once was. Many potentially brilliant content creators are getting cold feet at the thought of launching their own CTV channels. To overcome these fears, it’s important to embrace them first.

1. Failing to start

There is a belief that developing a channel from scratch requires either proficient coding skills or paying a fortune to those who have them. So, instead of starting, say, a Roku channel, content producers tend to be put off by the prospect of coding or of not being able to make ends meet. To lower the level of anxiety, it’s always useful to look at the available options.

If having someone build a channel for you is the preferred model, specialized agencies are the best choice. These companies usually have their own in-house development teams and charge either a set price or a revenue share, which gives some room for maneuver. Alternatively, there are freelance developers whose rates normally start from $25/hour on Upwork.

The downside of both solutions is that they depend on developers’ availability and may eventually turn out to be slow-moving and quite pricey. Yet, they will definitely take some work off one’s plate. At the same time, there are ways of developing a CTV app without going bankrupt or diving headlong into programming languages.


In addition to custom channel development, some CTV platforms, such as Roku or Amazon Fire, offer their own no-coding solutions for channel owners. Roku, for example, has a free tool called Direct Publisher. Yes, this tool limits customization, monetization, and third-party analytics options, but it does save time and money and, more importantly, keeps channel owners with no coding experience sane. As a compromise between basic and advanced features, there’s Instant TV Channel, a moderately priced, cloud-based, code-free service for developing Roku channels that costs $45.95/month. It creates and maintains a video feed and offers a range of customization options. Consequently, if coding isn’t a channel owner’s strong suit, there’s no need to pay millions or spend months trying to make sense of programming. What’s crucial is the idea that drives a publisher and the content that will drive potential viewers.

2. Being mediocre

As CTV ad spend is surging and has already increased by 19% this year, based on IAB’s figures, more and more publishers are getting on board each day. This makes creating original content pretty challenging. Ultimately, channel owners are caught between the fear of meeting their channel doppelganger on one side and of being ‘eaten alive’ by channel giants like Netflix and Animal Planet on the other. Sounds quite dramatic, doesn’t it? If someone is still wondering whether there’s any space left for new apps in the CTV universe, it’s worth checking how many people happily watch channels others would never think of in the screensaver and special interest sections of the Roku platform.

As for the chances of becoming a copycat of your own concept, great minds do think alike but most of the time not so literally. Therefore, becoming a successful channel owner calls for out-of-the-box thinking, doing some research, and being generally both strategic and brave.

3. Having zero installs

Recent data from Roku shows that 85% of Americans are now streamers. Getting them excited about a new CTV app is no piece of cake, but it is not impossible either. So, if there is a genuine fear that no one will ever install a new Roku channel, here are several promotional techniques to keep that from happening.

First of all, it’s essential to make as many people as possible aware of a new channel via a website, emails, and social media. This is absolutely free, a bit time-consuming, but worthwhile. Secondly, it’s important to attend online and offline events and accept all networking opportunities where a channel owner can meet potential viewers and introduce the channel to them. Then, it’s good to think about collaborating with like-minded channels so as to make friends with indirect competitors and promote each other’s content.

Additionally, it would be beneficial to be included in one of those guides to the top new channels one should install. For this purpose and in general, getting feedback on the content from influencers can be a real game-changer. Finally, if there’s a need to level up the current promotional approach, it’s time to consider paid promotion.

Roku has its own self-serve platform for growing publishers’ audiences using tailored display and video ads. While its CPM rates can range significantly with no guaranteed number of installs, the platform is quite flexible in terms of budgets and can meet various needs and wants. What’s more, there’s the Allroll marketing platform, aimed at driving viewers to Roku channels by means of advanced targeting options and personalized advertising messages. It provides higher app exposure and, ultimately, +60% installs with the same budgets as those required for the native platform. So, there’s definitely a lot one can do to improve a channel’s results without getting overwhelmed.

4. Surrendering to YouTube

When talking about video channels, there is always an elephant in the room. This elephant’s name is, of course, YouTube. Some publishers are still skeptical about CTV platforms, thinking their videos will never perform there as well as they do on good old YouTube. They may also imagine they have to stick to one platform to see windfall gains. In fact, there’s much more to this than meets the eye.

No matter how successful, YouTube is just a service, at least for a content owner rather than a YouTube employee. Thus, there is no need to choose between different stages on which to play the content. On the contrary, it is better to use as many platforms as one can manage in order to reach as many viewers as possible. This is the smart way of promoting video content, raising brand awareness, and maximizing profit in the soaring digital space.

5. Getting lost in streaming obscurity

It’s not exactly a secret that the world of streaming is currently run by four major operating systems: Roku, Amazon Fire, Android TV/Google TV, and Apple TV. The first two have the biggest shares, with 100.2 million (Roku) and 72.7 million (Amazon Fire) users, according to eMarketer. The rest of the players are of a somewhat lower caliber. Picking one platform for an app may seem like a tough job, bearing in mind that their characteristics resemble each other in so many ways. For instance, Roku uses Audience Network with broad geolocation options for targeting and a revenue-share model for monetization within its Direct Publisher mode. Meanwhile, Amazon Fire’s code-free Amazon Creator offers extensive data on consumers’ preferences collected from Amazon devices and commission-based monetization. This may rightly seem quite confusing.

The reasonable tactic for not getting puzzled over the best bet is to follow the audience. People mainly prefer streaming platforms that relate to an operating system they are plugged into in their everyday lives. So, if they actively use an Amazon Prime account or are fond of Alexa, these consumers are likely to go for an Amazon Fire Stick for their streaming experience.

Similarly, devotees of Apple products will favor Apple TV, whereas Android users will stick with Android TV. Roku is something of a black sheep in this family, as it has always been solely TV-oriented. Still, it’s extremely user-friendly, very affordable, and its devices have been voted the best of this crowd on numerous occasions. Without beating around the bush, knowing your audience is the key.

Takeaways

The CTV market has been on the rise, offering publishers ever more advanced opportunities to reach their viewers. Meanwhile, the stakes of being bog-standard or outdated have risen as the competition has become more severe. This has left some content producers panicked about their chances of success instead of focused on bringing new creative ideas to life. After all, living in fear is counterproductive, and the best way to face fears is to meet them in person. The launch of a new CTV app involves a series of important decisions at every step: the platform or platforms to use, the development strategy, content ideas, promotional tools, and monetization models. It’s vital to pay attention to every single decision throughout this journey. Now, time to get down to business.

Alex Zakrevsky is the CEO of Allroll, a marketing platform for CTV/OTT channel owners. Innovator, product lover, CTV and programmatic enthusiast. He believes that the quality of the product always wins.

The post Five fears of channel owners: What spooks you about creating your own CTV app? appeared first on Search Engine Watch.

Search Engine Watch


Cloud infrastructure revenue grows 33% this quarter to almost $33B

October 31, 2020

The cloud infrastructure market kept growing at a brisk pace last quarter, as the pandemic continued to push more companies to the cloud with offices shut down in much of the world. This week the big three — Amazon, Microsoft and Google — all reported their numbers and, as expected, the news was good, with Synergy Research reporting revenue growth of 33% year over year, up to almost $33 billion for the quarter.

Still, John Dinsdale, chief analyst at Synergy, was a bit taken aback that the market continued to grow as much as it did. “While we were fully expecting continued strong growth in the market, the scale of the growth in Q3 was a little surprising,” he said in a statement.

He added, “Total revenues were up by $2.5 billion from the previous quarter causing the year-on-year growth rate to nudge upwards, which is unusual for such a large market. It is quite clear that COVID-19 has provided an added boost to a market that was already developing rapidly.”

As usual, Amazon led the way with $11.6 billion in revenue, up from $10.8 billion last quarter, a 29% increase year over year. Amazon continues to exhibit slowing growth in the cloud market, but because of its 33% market share lead, a figure that has held fairly steady for some time, the growth rate is less important than the eye-popping revenue it continues to generate, almost double that of its closest rival, Microsoft.

Speaking of Microsoft, Azure revenue was up 48% year over year, also slowing some, but good enough for a strong second place with 18% market share. Using Synergy’s total quarterly number of $33 billion, Microsoft came in at $5.9 billion in revenue for the quarter, up from $5.2 billion last quarter.

Finally, Google announced cloud revenue of $3.4 billion, but that number includes all of its cloud revenue, including G Suite and other software. Synergy reported that Google’s infrastructure share was 9%, or $2.98 billion, up from $2.7 billion last quarter, good for third place.

Alibaba and IBM were tied for fourth with 5%, or around $1.65 billion each.

Synergy Research cloud infrastructure relative market positions. Amazon is the largest circle followed by Microsoft.

Image Credits: Synergy Research

It’s worth noting that Canalys had similar numbers to Synergy, with growth of 33% to $36.5 billion. They had the same market order with slightly different numbers: Amazon at 32%, Microsoft at 19%, Google at 7%, and Alibaba in fourth place at 6%.

Canalys sees continued growth ahead, especially as hybrid cloud begins to merge with newer technologies like 5G and edge computing. “All three [providers] are collaborating with mobile operators to deploy their cloud stacks at the edge in the operators’ data centers. These are part of holistic initiatives to profit from 5G services among business customers, as well as transform the mobile operators’ IT infrastructure,” Canalys analyst Blake Murray said in a statement.

While the growth rate continues to move steadily downward over time, as expected in a maturing market like cloud infrastructure, companies keep shifting workloads to the cloud more rapidly during the pandemic and finding new use cases like 5G and edge computing, so the market could continue to generate substantial revenue well into the future.


Enterprise – TechCrunch


TikTok stars got a judge to block Trump’s TikTok ban

October 31, 2020

TikTok has won another battle in its fight against the Trump administration’s ban of its video-sharing app in the U.S. — or, more accurately in this case, the TikTok community won a battle. On Friday, a federal judge in Pennsylvania issued an injunction blocking the restrictions that would otherwise have stopped TikTok from operating in the U.S. on November 12.

This particular lawsuit was not led by TikTok itself, but rather a group of TikTok creators who use the app to engage with their million-plus followers.

According to the court documents, plaintiff Douglas Marland has 2.7 million followers on the app; Alec Chambers has 1.8 million followers; and Cosette Rinab has 2.3 million followers. The creators argued — successfully as it turns out — that they would lose access to their followers in the event of a ban, as well as the “professional opportunities afforded by TikTok.” In other words, they’d lose their brand sponsorships — meaning, their income.

This is not the first time that the U.S. courts have sided with TikTok to block the Trump administration’s proposed ban on the Chinese-owned video-sharing app. Last month, a D.C. judge blocked the part of the ban that would have removed the app from the U.S. app stores run by Apple and Google.

That ruling had not, however, stopped the November 12 ban that would have blocked companies from providing internet hosting services that would have allowed TikTok to continue to operate in the U.S.

The Trump administration had moved to block the TikTok app from operating in the U.S. because of its Chinese parent company, ByteDance, claiming the app was a national security threat. The core argument from the judge in this ruling was that the “Government’s own descriptions of the national security threat posed by the TikTok app are phrased in the hypothetical.”

The government, the judge argued, was unable to show that this hypothetical risk outweighed the public interest. That interest, in this case, was the more than 100 million users of TikTok and creators like Marland, Chambers and Rinab who use it to spread “informational materials,” which allowed the judge to rule that the ban would shut down a platform for expressive activity.

“We are deeply moved by the outpouring of support from our creators, who have worked to protect their rights to expression, their careers, and to help small businesses, particularly during the pandemic,” said Vanessa Pappas, Interim Global Head of TikTok, in a statement. “We stand behind our community as they share their voices, and we are committed to continuing to provide a home for them to do so,” she added.

The TikTok community coming to the rescue on this one aspect of the overall TikTok picture just elevates the whole story. Though the company has been relatively quiet through this process, Pappas has thanked the community several times for its outpouring of support. Though there were some initial waves of “grief” on the app, with creators frantically recommending people follow them on other platforms, that has morphed over time into more of a “let’s band together” vibe. This activity coalesced around a big swell in voting advocacy on the platform, where many creators are too young to vote themselves but view voting messaging as their way to participate.

TikTok has remained active in the product department through the whole mess, shipping elections guides and trying to ban QAnon conspiracy spread, even as Pakistan banned and then un-banned the app.

Social – TechCrunch


Adjusting Featured Snippet Answers by Context

October 31, 2020

How Are Featured Snippet Answers Decided Upon?

I recently wrote about Featured Snippet Answer Scores Ranking Signals. In that post, I described how Google was likely using query-dependent and query-independent ranking signals to create answer scores for queries that appear to be seeking answers.

One of the inventors of that patent from that post was Steven Baker. I looked at other patents that he had written, and noticed that one of those was about context as part of query independent ranking signals for answers.

Remembering that patent about question-answering and context, I felt it was worth reviewing that patent and writing about it.

This patent is about processing question queries that want textual answers and how those answers may be decided upon.

It is a complicated patent, and at one point the description behind it seems to get a bit murky, but I point out where that happens, and I think the other details provide a lot of insight into how Google is scoring featured snippet answers. There is an additional related patent that I will be following up on after this post, and I will link to it from here as well.

This patent starts by telling us that a search system can identify resources in response to queries submitted by users and provide information about the resources in a manner that is useful to the users.

How Context Scoring Adjustments for Featured Snippet Answers Works

Users of search systems are often searching for an answer to a specific question, rather than a listing of resources, like in this drawing from the patent, showing featured snippet answers:

featured snippet answers

For example, users may want to know what the weather is in a particular location, a current quote for a stock, the capital of a state, etc.

When queries that are in the form of a question are received, some search engines may perform specialized search operations in response to the question format of the query.

For example, some search engines may provide information responsive to such queries in the form of an “answer,” such as information provided in the form of a “one box” to a question, which is often a featured snippet answer.

Some question queries are better served by explanatory answers, which are also referred to as “long answers” or “answer passages.”

For example, for the question query [why is the sky blue], an answer explaining light as waves is helpful.

featured snippet answers - why is the sky blue

Such answer passages can be selected from resources that include text, such as paragraphs, that are relevant to the question and the answer.

Sections of the text are scored, and the section with the best score is selected as an answer.

In general, the patent tells us about one aspect of what it covers in the following process:

  • Receiving a query that is a question query seeking an answer response
  • Receiving candidate answer passages, each passage made of text selected from a text section subordinate to a heading on a resource, with a corresponding answer score
  • Determining a hierarchy of headings on a page, with two or more heading levels hierarchically arranged in parent-child relationships, where each heading level has one or more headings, a subheading of a respective heading is a child heading in a parent-child relationship and the respective heading is a parent heading in that relationship, and the heading hierarchy includes a root level corresponding to a root heading (for each candidate answer passage)
  • Determining a heading vector describing a path in the hierarchy of headings from the root heading to the respective heading to which the candidate answer passage is subordinate, determining a context score based, at least in part, on the heading vector, adjusting the answer score of the candidate answer passage at least in part by the context score to form an adjusted answer score
  • Selecting an answer passage from the candidate answer passages based on the adjusted answer scores

Advantages of the process in the patent

  1. Long query answers can be selected, based partially on context signals indicating answers relevant to a question
  2. The context signals may be, in part, query-independent (i.e., scored independently of their relatedness to the terms of the query)
  3. This part of the scoring process considers the context of the document (“resource”) in which the answer text is located, accounting for relevancy signals that may not otherwise be accounted for during query-dependent scoring
  4. Following this approach, long answers that are more likely to satisfy a searcher’s informational need are more likely to appear as answers

This patent can be found at:

Context scoring adjustments for answer passages
Inventors: Nitin Gupta, Srinivasan Venkatachary, Lingkun Chu, and Steven D. Baker
US Patent: 9,959,315
Granted: May 1, 2018
Appl. No.: 14/169,960
Filed: January 31, 2014

Abstract

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for context scoring adjustments for candidate answer passages.

In one aspect, a method includes scoring candidate answer passages. For each candidate answer passage, the system determines a heading vector that describes a path in the heading hierarchy from the root heading to the respective heading to which the candidate answer passage is subordinate; determines a context score based, at least in part, on the heading vector; and adjusts answer score of the candidate answer passage at least in part by the context score to form an adjusted answer score.

The system then selects an answer passage from the candidate answer passages based on the adjusted answer scores.

Using Context Scores to Adjust Answer Scores for Featured Snippets

A drawing from the patent shows different hierarchical headings that may be used to determine the context of answer passages that may be used to adjust answer scores for featured snippets:

Hierarchical headings for featured snippets

I discuss these headings and their hierarchy below. Note that the headings include the page title (About the Moon) as well as the headings within heading elements on the page, and those headings give the answers context.

This context scoring process starts with receiving candidate answer passages and a score for each of the passages.

Those candidate answer passages and their respective scores are provided to a search engine that receives a query determined to be a question.

Each of those candidate answer passages is text selected from a text section under a particular heading from a specific resource (page) that has a certain answer score.

For each resource where a candidate answer passage has been selected, a context scoring process determines a heading hierarchy in the resource.

A heading is text or other data corresponding to a particular passage in the resource.

As an example, a heading can be text summarizing a section of text that immediately follows the heading (the heading describes what the text is about that follows it, or is contained within it.)

Headings may be indicated, for example, by specific formatting data, such as heading elements using HTML.

This next section from the patent reminded me of an observation that Cindy Krum of Mobile Moxie has about named anchors on a page, and how Google might index those to answer a question, to lead to an answer or a featured snippet. She wrote about those in What the Heck are Fraggles?

A heading could also be anchor text for an internal link (within the same page) that links to an anchor and corresponding text at some other position on the page.

A heading hierarchy could have two or more heading levels that are hierarchically arranged in parent-child relationships.

The first level, or the root heading, could be the title of the resource.

Each of the heading levels may have one or more headings, and a subheading of a respective heading is a child heading and the respective heading is a parent heading in the parent-child relationship.

For each candidate passage, a context scoring process may determine a context score based, at least in part, on the relationship between the root heading and the respective heading to which the candidate answer passage is subordinate.

To determine the context score, the context scoring process determines a heading vector that describes a path in the heading hierarchy from the root heading to the respective heading.

The context score could be based, at least in part, on the heading vector.

The context scoring process can then adjust the answer score of the candidate answer passage at least in part by the context score to form an adjusted answer score.

The context scoring process can then select an answer passage from the candidate answer passages based on adjusted answer scores.

This flowchart from the patent shows the context scoring adjustment process:

context scoring adjustment flowchart

Identifying Question Queries And Answer Passages

I’ve written about understanding the context of answer passages. The patent tells us more about question queries and answer passages, and it is worth going over that in more detail.

Some queries are in the form of a question or an implicit question.

For example, the query [distance of the earth from the moon] is in the form of an implicit question “What is the distance of the earth from the moon?”

An implicit question - the distance from the earth to the moon

Likewise, a question may be specific, as in the query [How far away is the moon].

The search engine includes a query question processor that uses processes that determine if a query is a query question (implicit or specific) and if it is, whether there are answers that are responsive to the question.

The query question processor can use several different algorithms to determine whether a query is a question and whether there are particular answers responsive to the question.

For example, to determine question queries and answers, it may use:

  • Language models
  • Machine learned processes
  • Knowledge graphs
  • Grammars
  • Combinations of those

The query question processor may choose candidate answer passages in addition to or instead of answer facts. For example, for the query [how far away is the moon], an answer fact is 238,900 miles. And the search engine may just show that factual information since that is the average distance of the Earth from the moon.

But, the query question processor may also choose to identify passages that are highly relevant to the question query.

These passages are called candidate answer passages.

The answer passages are scored, and one passage is selected based on these scores and provided in response to the query.

An answer passage may be scored, and that score may be adjusted based on a context, which is the point behind this patent.

Often Google will identify several candidate answer passages that could be used as featured snippet answers.

Google may look at the information on the pages where those answers come from to better understand the context of the answers such as the title of the page, and the headings about the content that the answer was found within.

Contextual Scoring Adjustments for Featured Snippet Answers

The query question processor sends to a context scoring processor some candidate answer passages, information about the resource each answer passage came from, and a score for each of the featured snippet answers.

The scores of the candidate answer passages could be based on the following considerations:

  • Matching a query term to the text of the candidate answer passage
  • Matching answer terms to the text of the candidate answer passages
  • The quality of the underlying resource from which the candidate answer passage was selected

I recently wrote about featured snippet answer scores, and how a combination of query dependent and query independent scoring signals might be used to generate answer scores for answer passages.

The patent tells us that the query question processor may also take into account other factors when scoring candidate answer passages.

Candidate answer passages can be selected from the text of a particular section of the resource. And the query question processor could choose more than one candidate answer passage from a text section.

We are given the following examples of different answer passages from the same page:

(These example answer passages are referred to in a few places in the remainder of the post.)

  • (1) It takes about 27 days (27 days, 7 hours, 43 minutes, and 11.6 seconds) for the Moon to orbit the Earth at its orbital distance
  • (2) Why is the distance changing? The moon’s distance from Earth varies because the moon travels in a slightly elliptical orbit. Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles
  • (3) The moon’s distance from Earth varies because the moon travels in a slightly elliptical orbit. Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles

Each of those answers could be good ones for Google to use. We are told that:

More than three candidate answers can be selected from the resource, and more than one resource can be processed for candidate answers.

How would Google choose between those three possible answers?

Google might decide based on the number of sentences and a maximum number of characters.

The patent tells us this about choosing between those answers:

Each candidate answer has a corresponding score. For this example, assume that candidate answer passage (2) has the highest score, followed by candidate answer passage (3), and then by candidate answer passage (1). Thus, without the context scoring processor, candidate answer passage (2) would have been provided in the answer box of FIG. 2. However, the context scoring processor takes into account the context of the answer passages and adjusts the scores provided by the query question processor.

So, we see that what might be chosen based on featured snippet answer scores could be adjusted based on the context of that answer from the page that it appears on.

Contextually Scoring Featured Snippet Answers

This process begins with a query determined to be a question query seeking an answer response.

This process next receives candidate answer passages, each candidate answer passage chosen from the text of a resource.

Each of the candidate answer passages is text chosen from a text section that is subordinate to a respective heading (under a heading) in the resource and has a corresponding answer score.

For example, the query question processor provides the candidate answer passages, and their corresponding scores, to the context scoring processor.

A Heading Hierarchy to Determine Context

This process then determines a heading hierarchy from the resource.

The heading hierarchy would have two or more heading levels hierarchically arranged in parent-child relationships (Such as a page title, and an HTML heading element.)

Each heading level has one or more headings.

A subheading of a respective heading is a child heading (an (h2) heading might be a subheading of a (title)) in the parent-child relationship and the respective heading is a parent heading in the relationship.

The heading hierarchy includes a root level corresponding to a root heading.

The context scoring processor can process heading tags in a DOM tree to determine a heading hierarchy.

hierarchical headings for featured snippets

For example, concerning the drawing about the distance to the moon just above, the heading hierarchy for the resource may be:

The ROOT heading (title): About The Moon (310)

  • H1: The Moon’s Orbit (330)
      • H2: How long does it take for the Moon to orbit Earth? (334)
      • H2: The distance from the Earth to the Moon (338)
  • H1: The Moon (360)
      • H2: Age of the Moon (364)
      • H2: Life on the Moon (368)

Here is how the patent describes this heading hierarchy:

In this heading hierarchy, The title is the root heading at the root level; headings 330 and 360 are child headings of the heading, and are at a first level below the root level; headings 334 and 338 are child headings of the heading 330, and are at a second level that is one level below the first level, and two levels below the root level; and headings 364 and 368 are child headings of the heading 360 and are at a second level that is one level below the first level, and two levels below the root level.

The process from the patent determines a context score based, at least in part, on the relationship between the root heading and the respective heading to which the candidate answer passage is subordinate.

This score may be based on a heading vector.

The patent says that the process, for each of the candidate answer passages, determines a heading vector that describes a path in the heading hierarchy from the root heading to the respective heading.

The heading vector would include the text of the headings for the candidate answer passage.

For the example candidate answer passages (1)-(3) above about how long it takes the moon to orbit the Earth, the respectively corresponding heading vectors V1, V2 and V3 are:

  • V1=<[Root: About The Moon], [H1: The Moon's Orbit], [H2: How long does it take for the Moon to orbit the Earth?]>
  • V2=<[Root: About The Moon], [H1: The Moon's Orbit], [H2: The distance from the Earth to the Moon]>
  • V3=<[Root: About The Moon], [H1: The Moon's Orbit], [H2: The distance from the Earth to the Moon]>

We are also told that because candidate answer passages (2) and (3) are selected from the same text section 340, their respective heading vectors V2 and V3 are the same (they are both in the content under the same (H2) heading.)
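
To make the heading vector idea more concrete, here is a minimal Python sketch of how vectors like V1, V2 and V3 could be derived once headings have been extracted in document order with their levels. It is an illustration only: the heading list, the passage-to-heading mapping, and the backwards walk are my assumptions, while the patent itself describes working from heading tags in the DOM tree.

    # Illustrative sketch: derive heading vectors (root -> ... -> heading) for
    # answer passages, given headings extracted in document order with levels.
    # The patent describes walking heading tags in the DOM tree; this is a stand-in.

    headings = [
        (0, "About The Moon"),  # root (title)
        (1, "The Moon's Orbit"),
        (2, "How long does it take for the Moon to orbit Earth?"),
        (2, "The distance from the Earth to the Moon"),
        (1, "The Moon"),
        (2, "Age of the Moon"),
        (2, "Life on the Moon"),
    ]

    def heading_vector(headings, index):
        """Path from the root heading down to the heading at `index`."""
        path = [headings[index]]
        level = headings[index][0]
        # Walk backwards, keeping the nearest heading at each shallower level.
        for lvl, text in reversed(headings[:index]):
            if lvl < level:
                path.append((lvl, text))
                level = lvl
        return [text for _, text in reversed(path)]

    # Passage (1) sits under the heading at index 2; passages (2) and (3) sit
    # under the heading at index 3, so they share the same vector.
    v1 = heading_vector(headings, 2)
    v2 = v3 = heading_vector(headings, 3)
    # v1 == ['About The Moon', "The Moon's Orbit",
    #        'How long does it take for the Moon to orbit Earth?']
    # v2 == v3 == ['About The Moon', "The Moon's Orbit",
    #              'The distance from the Earth to the Moon']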

The process of adjusting a score, for each answer passage, uses a context score based, at least in part, on the heading vector (410).

That context score can be a single score used to scale the candidate answer passage score or can be a series of discrete scores/boosts that can be used to adjust the score of the candidate answer passage.

Where things Get Murky in This Patent

There do seem to be several related patents involving featured snippet answers, and this one which targets learning more about answers from their context based on where they fit in a heading hierarchy makes sense.

But, I’m confused by how the patent tells us that one answer based on the context would be adjusted over another one.

The first issue I have is that the answers they are comparing in the same contextual area have some overlap. Here those two are:

  • (2) Why is the distance changing? The moon’s distance from Earth varies because the moon travels in a slightly elliptical orbit. Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles
  • (3) The moon’s distance from Earth varies because the moon travels in a slightly elliptical orbit. Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles

Note that the second answer and the third answer both include the same line: “Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles.” I find myself a little surprised that the second answer includes a couple of sentences that aren’t in the third answer, and skips a couple of lines from the third answer, and then includes the last sentence, which answers the question.

Since they both appear in the same heading and subheading section of the page they are from, it is difficult to imagine that there is a different adjustment based on context. But, the patent tells us differently:

The candidate answer passage with the highest adjusted answer score (based on context from the headings) is selected and provided as the answer passage.

Recall that in the example above, the candidate answer passage (2) had the highest score, followed by candidate answer passage (3), and then by candidate answer passage (1).

However, after adjustments, candidate answer passage (3) has the highest score, followed by candidate answer passage (2), and then candidate answer passage (1).

Accordingly, candidate answer passage (3) is selected and provided as the answer passage of FIG. 2.

Boosting Scores Based on Passage Coverage Ratio

A query question processor may limit the candidate answers to a maximum length.

The context scoring processor determines a coverage ratio, a measure of how much of its source text section the candidate answer passage covers.

The patent describes alternative question answers:

Alternatively, the text block may include text sections subordinate to respective headings that include a first heading for which the text section from which the candidate answer passage was selected is subordinate, and sibling headings that have an immediate parent heading in common with the first heading. For example, for the candidate answer passage, the text block may include all the text in the portion 380 of the hierarchy; or may include only the text of the sections, of some other portion of text within the portion of the hierarchy. A similar block may be used for the portion of the hierarchy for candidate answer passages selected from that portion.

A small coverage ratio may indicate a candidate answer passage is incomplete. A high coverage ratio may indicate the candidate answer passage captures more of the content of the text passage from which it was selected. A candidate answer passage may receive a context adjustment, depending on this coverage ratio.

A passage coverage ratio is the ratio of the total number of characters in the candidate answer passage to the total number of characters in the passage from which the candidate answer passage was selected.

The passage coverage ratio could also be the ratio of the total number of sentences (or words) in the candidate answer passage to the total number of sentences (or words) in the passage from which the candidate answer passage was selected.

We are told that other ratios can also be used.

From the three example candidate answer passages about the distance to the moon above (1)-(3) above, passage (1) has the highest ratio, passage (2) has the second-highest, and passage (3) has the lowest.

This process determines whether the coverage ratio is less than a threshold value. That threshold value can be, for example, 0.3, 0.35 or 0.4, or some other fraction. In our “distance to the moon” example, each coverage passage ratio meets or exceeds the threshold value.

If the coverage ratio is less than a threshold value, then the process selects a first answer boost factor. The first answer boost factor might be proportional to the coverage ratio according to a first relation, or it may be a fixed value, or a non-boosting value (e.g., 1.0).

But if the coverage ratio is not less than the threshold value, the process may select a second answer boost factor. The second answer boost factor may be proportional to the coverage ratio according to a second relation, or it may be a fixed value, or a value greater than the non-boosting value (e.g., 1.1).
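
As a small illustration of that threshold logic, here is a sketch in Python. The 0.4 threshold and the 1.0/1.1 boost values are simply the example figures mentioned above, not known production values, and the character-based ratio is just one of the ratios the patent allows.

    # Illustrative only: pick a coverage-based boost using the example
    # threshold and boost values from the discussion above.

    def coverage_ratio(candidate_text: str, section_text: str) -> float:
        """Characters in the candidate passage vs. the section it came from."""
        return len(candidate_text) / max(len(section_text), 1)

    def coverage_boost(ratio: float, threshold: float = 0.4) -> float:
        # Below the threshold: a non-boosting factor; otherwise a modest boost.
        return 1.0 if ratio < threshold else 1.1

    section = "Why is the distance changing? The moon's distance from Earth varies ..."
    passage = "The moon's distance from Earth varies ..."
    raw_answer_score = 0.8  # stand-in for a query-dependent answer score
    adjusted = raw_answer_score * coverage_boost(coverage_ratio(passage, section))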

Scoring Based on Other Features

The context scoring process can also check for the presence of features in addition to those described above.

Three example features for contextually scoring an answer passage are distinctive text, a preceding question, and a list format.

Distinctive text

Distinctive text is text that stands out because it is formatted differently from the surrounding text, for example through bolding.

A Preceding Question

A preceding question is a question in the text that precedes the candidate answer passage.

The search engine may process various amounts of text when checking for the question.

The check may be limited to the passage from which the candidate answer passage is extracted.

Alternatively, a larger text window that includes heading text and text from other sections may be checked.

A boost score that is inversely proportional to the text distance from a question to the candidate answer passage is calculated, and the check is terminated at the occurrence of a first question.

That text distance may be measured in characters, words, or sentences, or by some other metric.

If the question is anchor text for a section of text and there is intervening text, such as in the case of a navigation list, then the question is determined to only precede the text passage to which it links, not precede intervening text.

In the drawing above about the moon, there are two questions in the resource: “How long does it take for the Moon to orbit Earth?” and “Why is the distance changing?”

The first question–“How long does it take for the Moon to orbit Earth?”– precedes the first candidate answer passage by a text distance of zero sentences, and it precedes the second candidate answer passage by a text distance of five sentences.

And the second question–“Why is the distance changing?”– precedes the third candidate answer by zero sentences.

If a preceding question is detected, then the process selects a question boost factor.

This boost factor may be proportional to the text distance, whether the text is in a text passage subordinate to a header or whether the question is a header, and, if the question is in a header, whether the candidate answer passage is subordinate to the header.

Considering these factors, the third candidate answer passage receives the highest boost factor, the first candidate answer receives the second-highest boost factor, and the second candidate answer receives the smallest boost factor.
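
Here is a hedged sketch of how a question-based boost of that kind could work, assuming the boost decays as the question gets farther from the passage; the 1.2 maximum is an invented figure for illustration, not a value from the patent.

    # Illustrative only: a question boost that is largest when a preceding
    # question sits right before the passage and decays with sentence distance.

    def question_boost(sentence_distance: int, max_boost: float = 1.2) -> float:
        # Inversely related to text distance: distance 0 gets the full boost,
        # and the boost approaches 1.0 (no boost) as the distance grows.
        return 1.0 + (max_boost - 1.0) / (1 + sentence_distance)

    question_boost(0)  # 1.2   e.g. "Why is the distance changing?" right before passage (3)
    question_boost(5)  # ~1.03 the first question is five sentences before passage (2)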

Conversely, if no preceding question is detected, or after the question boost factor is selected, the process then checks for the presence of a list.

The Presence of a List

A list is an indication of several steps, usually instructive or informative. The detection of a list may be conditional on the query question being a step modal query.

A step modal query is a query for which a list-based answer is likely to be a good answer. Examples of step modal queries are queries like:

  • [How to . . . ]
  • [How do I . . . ]
  • [How to install a door knob]
  • [How do I change a tire]
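
As a rough illustration of how step modal queries might be recognized, here is a small sketch that treats "how to ..." and "how do I ..." style queries as step modal; the exact patterns Google uses are not spelled out in the patent, so the regular expression below is only an assumption.

    import re

    # Rough illustration: flag queries where a list-based answer is likely
    # to be a good answer (step modal queries).
    STEP_MODAL = re.compile(r"^\s*how\s+(to|do\s+i|do\s+you|can\s+i)\b", re.IGNORECASE)

    def is_step_modal(query: str) -> bool:
        return bool(STEP_MODAL.match(query))

    is_step_modal("How do I change a tire")               # True
    is_step_modal("distance of the earth from the moon")  # False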

The context scoring process may detect lists formed with:

  • HTML tags
  • Micro formats
  • Semantic meaning
  • Consecutive headings at the same level with the same or similar phrases (e.g., Step 1, Step 2; or First; Second; Third; etc.)

The context scoring process may also score a list for quality.

It would look at things such as:

  • Whether the list is in the center of the page or off to the side
  • Whether the list includes multiple links to other pages (indicative of reference lists)
  • Whether HREF link text occupies a large portion of the text of the list

A list in the center of a page that does not include multiple links to other pages and whose link text does not occupy a large portion of the list will be of higher quality than a list at the side of a page that does include multiple links to other pages and whose text is mostly HREF link text.

If a list is detected, then the process selects a list boost factor.

That list boost factor may be fixed or may be proportional to the quality score of the list.

If a list is not detected, or after the list boost factor is selected, the process ends.

In some implementations, the list boost factor may also be dependent on other feature scores.

If other features, such as coverage ratio, distinctive text, etc., have relatively high scores, then the list boost factor may be increased.

The patent tells us that this is because “the combination of these scores in the presence of a list is a strong signal of a high-quality answer passage.”

Adjustment of Featured Snippet Answers Scores

Answer scores for candidate answer passages are adjusted by scoring components based on heading vectors, passage coverage ratio, and other features described above.

The scoring process can select the largest boost value from those determined above or can select a combination of the boost values.

Once the answer scores are adjusted, the candidate answer passage with the highest adjusted answer score is selected as the featured snippet answer and is displayed to a searcher.
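
Pulling those pieces together, here is a sketch of what the adjustment and selection step could look like. The raw scores and boost values are invented purely to mirror the patent’s example outcome, and multiplying the boosts is just one of the combinations the patent allows (it also mentions taking the largest boost on its own).

    # Illustrative only: adjust each candidate's answer score by its context
    # boosts and pick the best passage. Scores and boosts here are made up to
    # mirror the example where passage (3) wins after adjustment.

    candidates = [
        # (passage, raw answer score, [coverage boost, question boost, list boost])
        ("passage 1", 0.70, [1.1, 1.1, 1.0]),
        ("passage 2", 0.80, [1.1, 1.0, 1.0]),
        ("passage 3", 0.75, [1.0, 1.2, 1.0]),
    ]

    def adjusted_score(raw_score, boosts):
        score = raw_score
        for boost in boosts:
            score *= boost
        return score

    best = max(candidates, key=lambda c: adjusted_score(c[1], c[2]))
    # best[0] == "passage 3": shown to the searcher as the featured snippet answer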

More to Come

I will be reviewing the first patent in this series of patents about candidate answer scores because it has some additional elements that haven't been covered in this post or in the post about query-dependent/independent ranking signals for answer scores. If you have been paying attention to how Google has been answering queries that appear to be seeking answers, you have likely seen those answers improving in many cases. Some answers have been really bad, though. It will be nice to have as complete an idea as we can of how Google decides what might be a good answer to a query, based on the information available to it on the Web.

Added October 14, 2020 – I have written about another Google patent on Answer Scores, and it’s worth reading about all of the patents on this topic. The new post is at Weighted Answer Terms for Scoring Answer Passages, and is about the patent Weighted answer terms for scoring answer passages.

It is about identifying questions in resources, and answers for those questions, and describes using term weights as a way to score answer passages (along with the scoring approaches identified in the other related patents, including this one.)

Added October 15, 2020 – I have written a few other posts about answer passages that are worth reading if you are interested in how Google finds questions on pages and answers to those, and scores answer passages to determine which ones to show as featured snippets. I’ve linked to some of those in the body of this post, but here is another one of those posts:

Added October 22, 2020 – I have written up a description of how structured and unstructured data is selected for answer passages based on specific criteria in the patent on Scoring Answer Passages, in the post Selecting Candidate Answer Passages.



The post Adjusting Featured Snippet Answers by Context appeared first on SEO by the Sea ⚓.


SEO by the Sea ⚓


iPhones can now tell blind users where and how far away people are

October 31, 2020

Apple has packed an interesting new accessibility feature into the latest beta of iOS: a system that detects the presence of and distance to people in the view of the iPhone’s camera, so blind users can social distance effectively, among many other things.

The feature emerged from Apple’s ARKit, for which the company developed “people occlusion,” which detects people’s shapes and lets virtual items pass in front of and behind them. The accessibility team realized that this, combined with the accurate distance measurements provided by the lidar units on the iPhone 12 Pro and Pro Max, could be an extremely useful tool for anyone with a visual impairment.

Of course during the pandemic one immediately thinks of the idea of keeping six feet away from other people. But knowing where others are and how far away is a basic visual task that we use all the time to plan where we walk, which line we get in at the store, whether to cross the street and so on.

The new feature, which will be part of the Magnifier app, uses the lidar and wide-angle camera of the Pro and Pro Max, giving feedback to the user in a variety of ways.

The lidar in the iPhone 12 Pro shows up in this infrared video. Each dot reports back the precise distance of what it reflects off of.

First, it tells the user whether there are people in view at all. If someone is there, it will then say how far away the closest person is in feet or meters, updating regularly as they approach or move further away. The sound corresponds in stereo to the direction the person is in the camera’s view.

Second, it allows the user to set tones corresponding to certain distances. For example, if they set the distance at six feet, they’ll hear one tone if a person is more than six feet away, another if they’re inside that range. After all, not everyone wants a constant feed of exact distances if all they care about is staying two paces away.

The third feature, perhaps extra useful for folks who have both visual and hearing impairments, is a haptic pulse that goes faster as a person gets closer.

Last is a visual feature for people who need a little help discerning the world around them, an arrow that points to the detected person on the screen. Blindness is a spectrum, after all, and any number of vision problems could make a person want a bit of help in that regard.

The system requires a decent image on the wide-angle camera, so it won’t work in pitch darkness. And while the restriction of the feature to the high end of the iPhone line reduces the reach somewhat, the constantly increasing utility of such a device as a sort of vision prosthetic likely makes the investment in the hardware more palatable to people who need it.

Here’s how it works so far:

This is far from the first tool like this — many phones and dedicated devices have features for finding objects and people, but it’s not often that it comes baked in as a standard feature.

People detection should be available to iPhone 12 Pro and Pro Max running the iOS 14.2 release candidate that was just made available today. Details will presumably appear soon on Apple’s dedicated iPhone accessibility site.

Mobile – TechCrunch


Digital Marketing in a VUCA World: Is it Time to Emerge from Plato’s Cave?

October 31, 2020

Google, Facebook, Amazon, etc. are giving digital marketers what Socrates would call a “superficial truth”. The data sets are incomplete.

Read more at PPCHero.com
PPC Hero


How to immediately profit from your next piece of content

October 30, 2020

30-second summary:

  • Most content marketers focus on creating blog posts and writing guest posts to improve SEO rankings. This approach overlooks the value of insightful content as a sales resource, especially for B2B firms.
  • What type of content works best for sales prospecting, with examples of content and an overview of how to create your outreach list.
  • An overview of two campaigns where blog content was used to generate leads for an SEO agency. Included in the overview are email templates and campaign outcomes.
  • How to review and optimize your content marketing outreach campaigns to generate more leads for your business.

Significant business resources are invested in creating content that is never engaged with, writing guest posts that are never read, and sharing content that is never seen. It’s a reality that most of us choose to ignore because we are fixated on inbound marketing.

While inbound marketing is effective, it’s not without problems:

  1. Most of the visitors who engage with your content will never return. It’s generally agreed that somewhere in the region of 2%-6% of first-time visitors return to a website.
  2. You have little control over who visits, and most visitors do not fit the profile of your customer persona (you’ll be doing very well if you convert even 0.3% of site visitors into customers).
  3. There are only so many spaces on the front page of Google. Truthfully, most of us will be fighting and failing to achieve our desired SERP rankings.

Outbound marketing sidesteps two of those three issues.

When you create a list of companies that fit your target demographic and then send emails to the relevant people in that company, you gain a degree of control over who consumes your content. Where you sit in the search rankings will not impact the outcome of your campaign.

While most sales teams use outbound marketing, few companies coordinate their content marketing efforts with outbound sales initiatives.

I believe that this is an oversight. I’ve secured several new customers for my agency in the last three months by coordinating my sales and content marketing efforts.

This guide will share an approach that I believe can help all businesses, but especially small to medium-sized businesses operating in the B2B space, acquire new customers. It’s a strategy that relies upon creating a small amount of really great content, then actively promoting that content to the right people. Let’s dive in.

1. Consider the goals of your customer

Ideally, your outbound marketing strategy should neatly fit into your long term content marketing goals. For me, an optimal content campaign that aligns with sales should look something like this.

At the start of the campaign, you need to identify relevant keywords to target.

The keywords you pick should align with your ideal customer’s pain points and the solution that you offer either through your product or service. For example, at my company, we help businesses in the SaaS niche secure guest posts on relevant sites. I decided that the initial outreach campaign would be based around my guide on how to guest post.

You can see how the topic aligns with the solution.

If you’re going to run an outreach campaign that utilizes content from your site, you must use informative content that offers value. After all, the article will be the first impression that you leave with a potential customer interacting with your business.

You can create multiple pieces of content around your product or service offering. However, I recommend you start with one piece of cornerstone content.

2. Create a customer outreach list

There is a good chance that you already have a strategy in place to promote new content. Often, that involves creating a list of sites that have linked to a competing piece of content. You then find the contact details of the author and send them a message asking for a link.

A sales outreach campaign based around a piece of content is just as straightforward. However, the goal and who you target is different.

I’ll assume you have a customer persona. You know what type of companies buy your products or services. You need to create a list of suitable companies. You can use resources like Google My Business, the Inc. 5000, and other business roundups to quickly create a list of suitable companies to contact.

Once you’ve created your shortlist, you need to find the details of the person in charge of purchasing decisions at each company. For an SEO agency, that person typically has a job title like ‘Chief Marketing Officer (CMO)’.

I generally use a combination of LinkedIn and an email finder to get their contact details. Hunter and Voila Norbert both offer 50 free searches, which is enough for an initial campaign.

Pull all of that information you collect into a Google Sheet.

3. Run your outreach campaign

There are numerous types of sales outreach campaigns you can run that incorporate blog content. For example, I collected the details of everyone who left a comment on the Backlinko blog. I removed individuals and companies that didn’t fit my customer persona and sent them all an email.

Below is a screenshot of the email template I used alongside one of the responses.

[Screenshot: the soft-sell outreach email template and one of the responses]

You can see this is a soft sell. The only reference to the service I offer is in my email signature, which links to a sales page. The primary resource in the email was this blog post.

I wanted to start a conversation with prospective customers, not generate an instant sale.

This particular outreach campaign, which was sent to around 200 people, generated two leads. In addition, I was asked to appear on a podcast and was offered a couple of guest post opportunities.
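If you want to personalize emails like this at scale without buying a dedicated tool, the rough mail-merge sketch below shows one way to do it. It assumes the prospect CSV from step two, and the template wording and URL are invented placeholders; swap in your own copy, subject line, and signature.

```python
import csv
from string import Template

# A hypothetical soft-sell template; the real wording, link, and signature are
# whatever you use in your own campaign.
TEMPLATE = Template(
    "Hi $contact,\n\n"
    "I came across $company in a recent industry roundup and thought this guide "
    "to guest posting might be useful: https://example.com/guest-post-guide\n\n"
    "Happy to answer any questions.\n\nNico\n"
)

# Read the prospect sheet (exported as CSV) and print a personalized draft for
# each row, ready to paste into your email client or sending tool.
with open("outreach_prospects.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(TEMPLATE.substitute(contact=row["contact"], company=row["company"]))
        print("-" * 40)
```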

You can be more direct. Here is an example from another campaign.

[Screenshot: the more direct outreach email from the Sumo campaign]

We leveraged the credibility of Sumo for this sales campaign. The company has more brand recognition than Launch Space, a site that few people would recognize.

The primary resource used for the Sumo sales campaign was this article. The guest post fits the criteria of a cornerstone piece of content. It’s actionable, insightful, and relevant to the needs of prospective customers.

You might have noticed that I adapted my email signature for the campaign. We generated two leads from our first 100 emails.

4. Review the results

If this is your first campaign, I recommend sending outreach emails to between 100 and 200 companies. Send your emails, then review the results a week or two later.

The first campaign we ran had a 1% conversion rate. I sent 100 emails and got one customer.

The math was simple.

I didn’t use any marketing tools for the campaign. You might choose to start the same way.

To improve the results of any marketing campaign, you need to track relevant metrics. There are plenty of affordable email tracking tools that provide insights like email opens, link clicks, and other statistics.

Good email tracking tools will also let you split test your copy. You’ll gather information on when people open your emails and who opened a message multiple times but didn’t respond. You can use this data to improve your campaign results, for example by scheduling emails for the best-performing time or day of the week, or by deciding who should receive a follow-up.
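To make the review step concrete, here is a small sketch that compares two hypothetical subject-line variants of the same outreach email. Every number in it is invented for illustration; plug in whatever your tracking tool, or a manual tally, actually reports.

```python
# Invented tallies for two subject-line variants of the same campaign.
variants = {
    "A": {"sent": 100, "opens": 42, "replies": 6, "leads": 1},
    "B": {"sent": 100, "opens": 55, "replies": 9, "leads": 2},
}

# Print open, reply, and lead-conversion rates so the variants are easy to compare.
for name, v in variants.items():
    print(f"Variant {name}: "
          f"{v['opens'] / v['sent']:.0%} opens, "
          f"{v['replies'] / v['sent']:.0%} replies, "
          f"{v['leads'] / v['sent']:.0%} lead conversion")
```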

Wrapping up

In this guide, I outlined how you can include blog posts and guest posts in your cold outreach to generate leads for your business. It’s a strategy that I’ve used to consistently land fresh clients, which has, in turn, helped me grow my business.

If you’re a B2B company selling a product or service with a high profit margin, outbound marketing will normally deliver a positive return on investment (ROI). It’s logical to use blog content as a sales resource, especially if you expect that content to eventually generate leads through inbound marketing. Most companies don’t do this; I hope this article has given you the impetus to try.

Nico Prins is an online marketer and the founder of Launch Space. He helps companies develop their digital marketing strategies. He’s worked with everyone from Fortune 500 companies to startups helping them develop content marketing strategies that align with their business goals. Follow him on Twitter @nhdprins.

The post How to immediately profit from your next piece of content appeared first on Search Engine Watch.

Search Engine Watch


Donut launches Watercooler, an easy way to socialize online with co-workers

October 30, 2020 No Comments

If you miss hanging out with your co-workers but don’t want to spend a single second more on Zoom, the latest product from Donut might be the answer.

The startup is launching its new Watercooler product today while also announcing that it has raised $ 12 million in total funding, led by Accel and with participation from Bloomberg Beta, FirstMark, Slack Fund and various angel investors.

Co-founder and CEO Dan Manian told me that this is actually money that the startup raised before the pandemic, across multiple rounds. It just didn’t announce the fundraising until now.

The startup’s vision, Manian said, is “to create human connection between people at work.” Its first product, Intros, uses Slack to connect teammates who don’t already know each other, often with the goal of setting up quick coffee meetings (originally in person and now virtual).

Donut says it has facilitated 4 million connections across 12,000 companies (including The New York Times, Toyota and InVision), with 1 million of those connections made since the beginning of the pandemic.

However, Manian said customers have been asking Donut to facilitate more frequent interactions, especially since most people aren’t going to have these coffee meetings every day. At the same time, people face the dueling issues of isolation and Zoom fatigue, where “the antidote to one thing makes the other pain worse.” And he suggested that one of the hardest things to recreate while so many of us are working remotely is “all the little microinteractions that you have while you’re working.”

That’s where Watercooler comes in. As the name suggests, it’s designed to replicate the feeling of hanging out at the office watercooler, having brief, low-key conversations. Like Intros, it integrates with Slack, creating a new channel where Watercooler posts fun, conversation-starting questions like “What’s your favorite form of potato?” or “What’s one thing you’ve learned in your career that you wish you knew sooner?”

Talking about these topics shouldn’t take much time, but Manian argued that brief conversations are important: “Those things add up to friendship over time, they’re what actually transform you from co-worker to friend.” And those friendships are important for employers too, because they help with team cohesion and retention.

I fully endorse the idea of a Slack watercooler — in fact, the TechCrunch editorial team has a very active “watercooler” channel and I’m always happy to waste time there. My big question was: Why do companies need to purchase a product for this?

Donut Watercooler. Image Credits: Donut

Manian said that “a bunch of our early adopters” had tried doing this manually, but they always described it in the past tense: “It got too hard to come up with the questions, or it took real work coming up with them, whoever was doing it already had a full-time job.”

With Watercooler, on the other hand, the company can choose from pre-selected topics and questions, set the frequency with which those questions are posted and then everything happens automatically.

Manian also noted that different organizations will focus on different types of questions. There are no divisive political questions included, but while some teams will stick to easy questions about things like potatoes and breakfast foods, others will get into more substantive topics like the ways that people prefer to receive feedback.

And yes, Manian thinks companies will still need these tools after the pandemic is over.

“Work has fundamentally changed,” he said. “I don’t think we’ll put remote work back in the bottle. I think it’s here to stay.”

At the same time, he described the past few months as “training wheels” for a hybrid model, where some team members go back to the office while others continue working remotely. In his view, teams will then face an even bigger challenge: keeping their remote members feeling connected and in the loop.



Enterprise – TechCrunch


WhatsApp is now delivering roughly 100 billion messages a day

October 30, 2020 No Comments

WhatsApp, the popular instant messaging app owned by Facebook, is now delivering roughly 100 billion messages a day, the company’s chief executive Mark Zuckerberg said at the quarterly earnings call Thursday.

For some perspective, users exchanged 100 billion messages on WhatsApp last New Year’s Eve. That is the day WhatsApp’s engagement peaks and, as many of you may remember, also the time when the service has customarily suffered glitches in past years. (No outage last New Year’s Eve!)

At this point, WhatsApp is just competing with itself. Facebook Messenger and WhatsApp together were used to exchange 60 billion messages a day as of early 2016. Apple chief executive Tim Cook said in May that iMessage and FaceTime were seeing record usage, but did not share specific figures. The last time Apple did share a figure, it was far behind WhatsApp’s usage at the time (podcast). WeChat, which has also amassed over 1 billion users, is behind in daily message volume, too.

In early 2014, WhatsApp was being used to exchange about 50 billion texts a day, its then chief executive Jan Koum revealed at an event.

At the time, WhatsApp had fewer than 500 million users. WhatsApp now has more than 2 billion users, and at least in India, its largest market by users, its popularity surpasses that of every other smartphone app, including the big blue app.

“This year we’ve all relied on messaging more than ever to keep up with our loved ones and get business done,” tweeted Will Cathcart, head of WhatsApp.

Sadly, that’s all the company shared about WhatsApp today. It remains unclear when WhatsApp expects to resume its payments service in Brazil, and when it plans to launch payments in India, where it began testing the service in 2018. (It has already shared big plans around financial services in India, though.)

“We are proud that WhatsApp is able to deliver roughly 100B messages every day and we’re excited about the road ahead,” said Cathcart.


Social – TechCrunch


In Embryos, Crispr Can Cut Out Whole Chromosomes—That’s Bad

October 30, 2020 No Comments

The DNA-cutting tool has been hailed as a way to fix genetic glitches. But a new study suggests it can remove more than scientists bargained for.
Feed: All Latest

