American Online Phonebook

Monthly Archives: February 2019

Introducing Google Marketing Platform

February 28, 2019 No Comments

The online world was very different when DoubleClick debuted in 1996. Things we take for granted today, like texting emoji, sharing funny YouTube videos, or taking smartphone selfies, were all years away.

Of course, changes in technology have meant changes for digital marketers. There’s been an explosion of channels, formats and data. Consumers are also more aware of how they’re being marketed to and how their data is being used—and they want more control.

To address these new realities, marketers need tools that make it easy to get better results from their marketing in a way that puts privacy first.

This is why we’re announcing Google Marketing Platform.

Google Marketing Platform brings together DoubleClick Digital Marketing and the Google Analytics 360 Suite to help you plan, buy, measure and optimize digital media and customer experiences in one place. Google Marketing Platform helps you deliver more relevant and effective marketing, while ensuring that you respect your customers’ privacy and give them control over their data.

In our recent survey of global marketing organizations, we learned that the #1 priority for marketers is to better understand their customers. By offering tools that make it easy to collaborate and share insights, Google Marketing Platform helps achieve this customer-first approach to marketing.

In the U.S., adidas has started working more collaboratively across their digital teams to share insights and get a deeper understanding of their customers. Chris Murphy, Head of Digital Experience, describes their approach:

“Our adidas teams work together in one environment where we can see audience insights, what creative we’re running and where, how it’s performing, and make changes almost in real time.”
Chris Murphy, Head of Digital Experience, adidas

Better results with ads plus analytics

Google Marketing Platform builds on existing integrations between the Google Analytics 360 Suite and DoubleClick advertiser products. Marketers have seen great results when they use ads and analytics technology together. For example, BookIt used Analytics 360 to uncover insights about the types of travelers interested in their brand and used these insights to create more relevant campaigns in Display & Video 360. The result was a 20 percent increase in revenue.

Now, with Google Marketing Platform, we’re introducing ways to make our products work even better together. For example, the new Integration Center helps you discover and easily set up valuable connections between products.

Google Marketing Platform also supports 100+ integrations with exchanges, measurement solutions, and other technology providers. In short, you can choose what media you buy, how you buy it, and how you measure it.

Search Ads 360 and Display & Video 360

With Google Marketing Platform, we’re also making changes to some of our advertising products.

Search Ads 360 is the new name for DoubleClick Search. Search Ads 360 will continue to help you plan, buy, and measure your search campaigns on Google and other search engines.

Display & Video 360 brings together features from our display advertising products: DoubleClick Bid Manager, Campaign Manager, Studio and Audience Center. Display & Video 360 allows you to execute ad campaigns end-to-end in one place, creating efficiency in how you work and helping your teams do more together.

Don’t worry, Campaign Manager and other DoubleClick products aren’t going anywhere right away. We’ll gradually transition customers to Display & Video 360 as additional features become available.

Looking ahead

This is just the beginning of the next chapter in our platforms story. We’re committed to building solutions that help you achieve your marketing goals while meeting consumers’ high expectations for privacy, transparency and control.

We’ll be sharing more about Google Marketing Platform and Display & Video 360 at Google Marketing Live. Sign up to watch the live streamed keynote on July 10, 9:00 a.m. PT / 12:00 p.m. ET.


Google Analytics Blog


Open-source communities fight over telco market

February 28, 2019 No Comments

When you think of MWC Barcelona, chances are you’re thinking about the newest smartphones and other mobile gadgets, but that’s only half the story. Actually, it’s probably far less than half the story because the majority of the business that’s done at MWC is enterprise telco business. Not too long ago, that business was all about selling expensive proprietary hardware. Today, it’s about moving all of that into software — and a lot of that software is open source.

It’s maybe no surprise then that this year, the Linux Foundation (LF) has its own booth at MWC. It’s not massive, but it’s big enough to have its own meeting space. The booth is shared by the three LF projects: the Cloud Native Computing Foundation (CNCF), Hyperledger and Linux Foundation Networking, the home of many of the foundational projects like ONAP and the Open Platform for NFV (OPNFV) that power many a modern network. And with the advent of 5G, there’s a lot of new market share to grab here.

To discuss the CNCF’s role at the event, I sat down with Dan Kohn, the executive director of the CNCF.

At MWC, the CNCF launched its testbed for comparing the performance of virtual network functions on OpenStack and what the CNCF calls cloud-native network functions, using Kubernetes (with the help of bare-metal host Packet). The project’s results — at least so far — show that the cloud-native container-based stack can handle far more network functions per second than the competing OpenStack code.

“The message that we are sending is that Kubernetes as a universal platform that runs on top of bare metal or any cloud, most of your virtual network functions can be ported over to cloud-native network functions,” Kohn said. “All of your operating support system, all of your business support system software can also run on Kubernetes on the same cluster.”

OpenStack, in case you are not familiar with it, is another massive open-source project that helps enterprises manage their own data center software infrastructure. One of OpenStack’s biggest markets has long been the telco industry. There has always been a bit of friction between the two foundations, especially now that the OpenStack Foundation has opened up its organizations to projects that aren’t directly related to the core OpenStack projects.

I asked Kohn if he is explicitly positioning the CNCF/Kubernetes stack as an OpenStack competitor. “Yes, our view is that people should be running Kubernetes on bare metal and that there’s no need for a middle layer,” he said — and that’s something the CNCF has never stated quite as explicitly before but that was always playing in the background. He also acknowledged that some of this friction stems from the fact that the CNCF and the OpenStack foundation now compete for projects.

OpenStack Foundation, unsurprisingly, doesn’t agree. “Pitting Kubernetes against OpenStack is extremely counterproductive and ignores the fact that OpenStack is already powering 5G networks, in many cases in combination with Kubernetes,” OpenStack COO Mark Collier told me. “It also reflects a lack of understanding about what OpenStack actually does, by suggesting that it’s simply a virtual machine orchestrator. That description is several years out of date. Moving away from VMs, which makes sense for many workloads, does not mean moving away from OpenStack, which manages bare metal, networking and authentication in these environments through the Ironic, Neutron and Keystone services.”

Similarly, ex-OpenStack Foundation board member (and Mirantis co-founder) Boris Renski told me that “just because containers can replace VMs, this doesn’t mean that Kubernetes replaces OpenStack. Kubernetes’ fundamental design assumes that something else is there that abstracts away low-level infrastructure, and is meant to be an application-aware container scheduler. OpenStack, on the other hand, is specifically designed to abstract away low-level infrastructure constructs like bare metal, storage, etc.”

This overall theme continued with Kohn and the CNCF taking a swipe at Kata Containers, the first project the OpenStack Foundation took on after it opened itself up to other projects. Kata Containers promises to offer a combination of the flexibility of containers with the additional security of traditional virtual machines.

“We’ve got this FUD out there around Kata and saying: telco’s will need to use Kata, a) because of the noisy neighbor problem and b) because of the security,” said Kohn. “First of all, that’s FUD and second, micro-VMs are a really interesting space.”

He believes it’s an interesting space for situations where you are running third-party code (think AWS Lambda running Firecracker) — but telcos don’t typically run that kind of code. He also argues that Kubernetes handles noisy neighbors just fine because you can constrain how many resources each container gets.
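
For context on that last point: in Kubernetes, each container can declare CPU and memory requests and limits, which is the mechanism that keeps one workload from starving its neighbors. Below is a minimal, illustrative sketch using the official Python client; the container name and image are made up for the example.

    from kubernetes import client

    # Illustrative only: a container spec with CPU/memory requests and limits,
    # the knobs Kubernetes uses to fence off "noisy neighbor" workloads.
    container = client.V1Container(
        name="vnf-worker",                      # hypothetical name
        image="example.com/vnf-worker:latest",  # hypothetical image
        resources=client.V1ResourceRequirements(
            requests={"cpu": "500m", "memory": "256Mi"},
            limits={"cpu": "1", "memory": "512Mi"},
        ),
    )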

It seems both organizations have a fair argument here. On the one hand, Kubernetes may be able to handle some use cases better and provide higher throughput than OpenStack. On the other hand, OpenStack handles plenty of other use cases, too, and this is a very specific use case. What’s clear, though, is that there’s quite a bit of friction here, which is a shame.


Enterprise – TechCrunch


Michael Cohen’s Credibility Has Never Been More Certain

February 28, 2019 No Comments

In his testimony before Congress Wednesday, Trump’s former fixer gave the most convincing narrative yet about Trump’s presidential run.
Feed: All Latest


It’s Time for Facebook to Become the Google Grants of Social Media

February 27, 2019 No Comments

Whenever I speak to Nonprofits (which is something I love to do), I always evangelize the importance of leveraging all of the online technology companies that offer “in-kind” services, especially Google Grants. However, for marketers in today’s world, Google Grants is simply not enough. Reaching potential donors and volunteers, and even raising simple awareness, has evolved well beyond the search engines and into our Facebook and Twitter feeds as we all crave instant news, gossip and basic information. In this post, I will discuss not only the steps that Facebook has already taken, but also how much more it needs to do to fulfill its obligation to assist those organizations in need.

What Facebook needs to Learn from Google

In the early months of 2002, Google relaunched its AdWords platform with a new cost-per-click (CPC) pricing model that made it increasingly popular and successful with both large and small companies. It was this achievement that opened the eyes of the Google founders and other Google executives to the idea of providing the same opportunity to Nonprofits by giving them free ads on Google.com. In essence, they believed that the AdWords platform would enable Nonprofits to reach a much larger audience and connect with the people who were searching for information about their specific cause or programs. As you will see below, it has grown by leaps and bounds…

Recent screenshot from the new Google Grants Blog:

Why Facebook Doesn’t Understand the Opportunity

After seeing the success of Google Grants for the past 13 years, you would think Facebook would already have a Nonprofit plan in place to offer free advertising to Nonprofits. However, it appears that even though they have made attempts to achieve this, it was simply not enough. In the AdWeek article “Nonprofits Rely Heavily on Social Media to Raise Awareness,” author Kimberlee Morrison notes that the social media presence of nonprofits is growing significantly. She goes on to say: “The report shows an increase of 29 percent in Facebook fans across all verticals and a 25 percent increase in Twitter followers. What’s more, there are big increases in sharing and likes from sources outside the follower base, so it would be wise for nonprofits to play to that strength on social sites if their aim is attracting a wider user base.”

How Facebook Failed in its First Attempt

Back on November 15, 2015, The Nonprofit Times published an interesting article entitled “$2 Million In Facebook Ads Going To Nonprofits,” in which Facebook announced, in partnership with ActionSprout, that it would distribute $2 million in Facebook Ads credits during the holiday season. These Facebook Ads credits (up to $1,500 each) would be given out to roughly two thousand nonprofits. Author Andy Segedin writes that, according to Drew Bernard, CEO and co-founder, organizations will receive credit allotments of $600, $900, $1,200 or $1,500, granted from December through February. All applicants will be set up with a free ActionSprout account, Bernard said.

The article goes on to say: “Bernard hopes that the credit giveaway will help organizations post more and better content on Facebook. The company plans to publish key findings based off of the distribution and use of the credits, but will not move forward with any follow-up efforts until information is gathered.” It continues: “This is a test to see what we can learn, and with what we learn we’ll all go back to the drawing board and see if there’s something we should do next with this.”

If you are interested in hearing more about the “key findings” of this test, you’re going to have to wait a little while and also give them your email address. (Not very philanthropic.)


In Conclusion:

As you can tell by my tone, I am somewhat disappointed by Facebook’s lack of initiative in its efforts to help Nonprofits. In my opinion, Facebook offers a much stronger platform than Google AdWords, based on its “intense” targeting as well as its “ripe and persuasive audience.” I am also quite shocked that it could not follow in the footsteps of Google’s 13 years of supporting Nonprofits with the Google Grants program. To add insult to injury, I am also dumbfounded that Facebook not only had to partner with another company, but also labeled its effort a test limited to a small number of Nonprofits for just a couple of months. What’s the point of a test, when you know Nonprofits could only benefit from the free advertising?

You almost get the sense that this was for the benefit of everyone else, except for the Nonprofits who need it the most.


PPC Marketing Agency | Search Marketing Firm | Adwords Certified Consultant


From basecamp to summit: Achieving new heights with Google Marketing Platform Partners

February 26, 2019 No Comments

Earlier this week we announced Google Marketing Platform, which brings together DoubleClick Digital Marketing and the Google Analytics 360 Suite into a single solution to plan, buy, measure and optimize customer experiences across channels and devices. But we all know having great technology is only part of the solution. You also need people with the expertise and knowledge to fully take advantage of everything the technology enables. It’s not unlike relying on Sherpas to help guide you from basecamp to the summit. You may be able to make the ascent on your own, but engaging a team of experts with a track record of success greatly improves your chances of making the summit. That’s why we’re excited to announce Google Marketing Platform Partners, a new program designed to ensure you have access to all the resources you need to get the most value from Google Marketing Platform.

A robust ecosystem of skilled practitioners and companies

More than just a replacement for the existing programs, Google Analytics Certified Partners and the DoubleClick Certified Marketing Partners, the new program is designed to provide a robust ecosystem of resources, no matter your needs. The foundation of the program is scaled training and capability-building across all the Google Marketing Platform products. Whether you’re looking to build skills in-house or partner with a service provider, the program helps ensure the needed skills and resources are readily available. With more than 500 companies in the program at launch, including leading interactive agencies, system integrators, and top technology, data and media companies, you’ll be able to find a partner to support multiple facets of your business.

Three unique designations

From skill-building to broader, strategic partnerships and technology reselling, the program is designed to deliver the range and quality of expertise you expect:

Certified Individuals: To help increase the talent pool available to support Google Marketing Platform, individuals will be able to access a growing library of self-study materials and complete individual product certifications. Successful completion signals an individual’s expertise with specific Google Marketing Platform products.
Certified Companies: Certified Companies provide consulting, training, implementation, operations and technical support services for Google Marketing Platform. These companies not only have individuals certified in one or more products, but they have a high level of knowledge, practical and industry experience, as well as stellar customer references. These strict requirements ensure they have both the expertise and a proven ability to deliver results.
Sales Partners: Sales Partners are Google Marketing Platform experts, just like Certified Companies, but partner more closely with Google in providing consulting and support services, in addition to selling the technology on our behalf.

Get started today

Whether you’re looking to add talent to your team, up-level your current talent, or complement your team with a partner company, Platform Partners offers a trusted source to help close the gaps. And we’ll continue to build out additional skill-building resources, refine our certifications and add new partners covering more countries and languages. To get started on taking your marketing to even greater heights, browse our current Partners to find a partner equipped to help you get the most from your investment in Google Marketing Platform.


Google Analytics Blog


Say hello to Microsoft’s new $3,500 HoloLens with twice the field of view

February 26, 2019 No Comments

Microsoft unveiled the latest version of its HoloLens ‘mixed reality’ headset at MWC Barcelona today. The new HoloLens 2 features a significantly larger field of view, higher resolution and a device that’s more comfortable to wear. Indeed, Microsoft says the device is three times as comfortable to wear (though it’s unclear how Microsoft measured this).

Later this year, HoloLens 2 will be available in the United States, Japan, China, Germany, Canada, United Kingdom, Ireland, France, Australia and New Zealand for $3,500.

One of the knocks against the original HoloLens was its limited field of view. When whatever you wanted to look at was small and straight ahead of you, the effect was striking. But when you moved your head a little bit or looked at a larger object, it suddenly felt like you were looking through a stamp-sized screen. HoloLens 2 features a field of view that’s twice as large as the original.

“Kinect was the first intelligent device to enter our homes,” HoloLens chief Alex Kipman said in today’s keynote, looking back at the device’s history. “It drove us to create Microsoft HoloLens. […] Over the last few years, individual developers, large enterprises, brand new startups have been dreaming up beautiful things, helpful things.”

The HoloLens was always just as much about the software as the hardware, though. For HoloLens, Microsoft developed a special version of Windows, together with a new way of interacting with the AR objects through gestures like air tap and bloom. In this new version, the interaction is far more natural and lets you tap objects. The device also tracks your gaze more accurately to allow the software to adjust to where you are looking.

“HoloLens 2 adapts to you,” Kipman stressed. “HoloLens 2 evolves the interaction model by significantly advancing how people engage with holograms.”

In its demos, the company clearly emphasized how much faster and more fluid the interaction with HoloLens applications becomes when you can use sliders, for example, by simply grabbing the slider and moving it, or by tapping on a button with a finger or two, or with your full hand. Microsoft even built a virtual piano that you can play with ten fingers to show off how well the HoloLens can track movement. The company calls this ‘instinctual interaction.’

Microsoft first unveiled the HoloLens concept at a surprise event on its Redmond campus back in 2015. After a limited, invite-only release that started days after the end of MWC 2016, the device went on sale to everybody in August 2016. Four years is a long time between hardware releases, but the company clearly wanted to seed the market and give developers a chance to build the first set of HoloLens applications on a stable platform.

To support developers, Microsoft is also launching a number of Azure services for HoloLens today. These include spatial anchors and remote rendering to help developers stream high-polygon content to HoloLens.

It’s worth noting that Microsoft never positioned the device as consumer hardware. It may have shown off the occasional game, but its focus was always on business applications, with a few educational applications thrown in, too. That trend continued today. Microsoft showed off the ability to have multiple people collaborate around a single hologram, for example. That’s not new, of course, but it goes to show how Microsoft is positioning this technology.

For these enterprises, Microsoft will also offer the ability to customize the device.

“When you change the way you see the world, you change the world you see,” Microsoft CEO Satya Nadella said, repeating a line from the company’s first HoloLens announcement four years ago. He noted that he believes that connecting the physical world with the virtual world will transform the way we will work.


Enterprise – TechCrunch


Facebook expands its internet infrastructure projects

February 26, 2019 No Comments

Like every year, Facebook is using MWC Barcelona to focus on its infrastructure projects. While you may mostly think of Facebook as a social network, the company started launching infrastructure projects for bringing more people online (and onto its network) many years ago.

These projects include things like the (now-cancelled) solar-powered Aquila drone and plenty of open-source software and hardware initiatives for carriers. Indeed, there are so many projects that range from physical devices and networks to software that it’s sometimes hard to keep up. That wide range is by design, though.

“The one thing that has been consistent since the very beginning is that there’s no silver bullet,” Facebook director of engineering Yael Maguire told me during an interview at MWC. “We try to contribute to different parts of the ecosystem. The ecosystem could be in dense urban markets where we’re doing things like Terragraph, or rural markets where we are doing Express Wi-Fi.”

At MWC, the company announced a number of new partnerships and projects that expand on its existing projects.

Maybe the most interesting of these projects is called Internet para Todos (IpT) Peru. What Facebook is trying to show here is that it’s possible to create an economically viable provider of rural mobile infrastructure. Facebook is building this together with Telefonica, IDB Invest and CAF (Development Bank of Latin America). It’s an open access network that will be open to all carriers. “It is very economically challenging to think about connecting small communities in rural parts of Peru, let alone other parts of the world,” Maguire said. “The idea is that we can create common infrastructure that is open access, let others innovate on business models and create competition etc. The hope is that a business case can close for IpT.” Over time, Facebook hopes to bring this model to other places, too — assuming it works, which Maguire admits is not a given since this is very much an experiment at this point. If the model works, though, then the hope is that commercial vendors will see that there’s money to be made by connecting these small rural communities.

As the company also announced today, Facebook is investing in a new 750km open-access fiber project in Nigeria, for example, which will provide fiber access to more than one million people. Facebook is co-investing in this project with a number of local state authorities. The company previously worked on a similar project in Uganda and as Maguire noted, it learned quite a bit from this experience, including how to make laying fiber through large bodies of water more economically viable. But it’s not just the logistics, it’s also working with the local bureaucracy — which Maguire says is harder than the technical challenges. “There’s not a lot of new technology that we are inventing for this right now,” he said, and also acknowledged that these are relatively small projects. But as the company learns, it plans to scale up these efforts and launch more projects in Africa, Latin America and Asia-Pacific.

The company is also announcing new partners for its Express Wi-Fi service, including Cell C in South Africa, Vodafone in Ghana and Globe in the Philippines. That’s on top of other partnerships in India, Nigeria, Kenya, Tanzania and Indonesia. The idea of Express Wi-Fi is to work with internet providers and mobile operators to help them build their Wi-Fi businesses and to give local entrepreneurs the tools to provide internet access to their neighbors.

As far as open-source projects go, Facebook also today announced the launch of Magma, a new open-source platform that makes mobile network deployments easier for carriers. The launch partner for Magma is Telefonica, which is using it in Latin America, and BRCK, which is using it to pilot a new LTE network in Kenya.

Terragraph, one of the company’s most successful open-source infrastructure projects that helps bring high-speed connectivity to urban and suburban communities, is now seeing new trials in Athens, Greece and Curitiba, Brazil and it’s already in production usage in Canon, Ohio and Penang, Malaysia, as well as Alameda, California.

Those are still small-scale projects, though, even if the local impact is huge. What’s maybe more important, though, is that it’s seeing increased support from hardware vendors, which now include MikroTik and Cambium Networks, in addition to Nokia and Radwin, which previously came on board.

One thing Maguire also noted is that Facebook remains as committed to these infrastructure projects as it has ever been. “We are trying to make sure we are learning and reflecting on everything that is happening and it’s important that we understand the role we play in all of this, but it’s super important and tied to the mission of what we do,” he said.


Social – TechCrunch


Google Patent on Structured Data Focuses upon JSON-LD

February 26, 2019 No Comments

Search Using Structured Data

Structured Data is information that is formatted into a repository that a search engine can read easily. Some examples include XML markup in XML sitemaps and schema vocabulary found in JSON-LD scripts. It is distinct from semi-structured and unstructured data, which have less formatting.
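
To make that concrete, here is a small illustration, written with Python’s standard json module, of the kind of key/value payload a JSON-LD script carries. The property names come from the public schema.org Book vocabulary; the particular book is just an example.

    import json

    # A schema.org "Book" item, the sort of payload a JSON-LD script block embeds in a page.
    book_item = {
        "@context": "https://schema.org",
        "@type": "Book",
        "name": "The Old Man and the Sea",
        "author": {"@type": "Person", "name": "Ernest Hemingway"},
        "datePublished": "1952",
    }

    # Serialize it the way it would appear inside the script tag.
    print(json.dumps(book_item, indent=2))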

A search engine that answers questions by crawling and indexing facts found within structured data on a site works differently from a search engine that looks at the words used in a query and tries to return documents of unstructured data containing those same words, hoping that such string matching might surface an actual answer to the informational need that inspired the query in the first place. Search using structured data works a little differently, as seen in this flowchart from a 2017 Google patent:

Flow Chart Showing Structured Data in a Search

In Schema, Structured Data, and Scattered Databases such as the World Wide Web, I talked about the Dipre Algorithm in a patent from Sergey Brin, which I described in the post Google’s First Semantic Search Invention was Patented in 1999. That patent and algorithm described how the web might be crawled to collect pattern and relation information about specific facts, in that case about books. In the Google patent on structured data, we see how Google might look for factual information set out in structured data such as JSON-LD, to be able to answer queries about facts, such as, “What book by Ernest Hemingway was published between 1948 and 1952?”

This newer patent tells us that it might solve that book search in this manner:

In particular, for each encoded data item associated with a given identified schema, the system searches the locations in the encoded data item identified by the schema as storing values for the specified keys to identify encoded data items that store values for the specified keys that satisfy the requirements specified in the query. For example, if the query is for semi-structured data items that have a value “Ernest Hemingway” for an “author” key and that have values in a range of “1948-1952” for a “year published” key, the system can identify encoded data items that store a value corresponding to “Ernest Hemingway” in the location identified in the schema associated with the encoded data item as storing the value for the “author” key and that store a value in the range from “1948-1952” in the location identified in the schema associated with the encoded data item as storing the value for the “year published” key. Thus, the system can identify encoded data items that satisfy the query efficiently, i.e., without searching encoded data items that do not include values for each key specified in the received query and without searching locations in the encoded data items that are not identified as storing values for the specified keys.
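
As a rough, hypothetical illustration of the idea in that passage (not the patent’s actual implementation), here is a small Python sketch. Each stored item is an encoded tuple of values, and its schema records which position holds which key, so a query only inspects the positions for the keys it cares about.

    # Toy model: a schema maps key names to positions inside an encoded value tuple.
    schemas = {
        "book_v1": {"author": 0, "year published": 1, "title": 2},
    }

    # Encoded data items store only values, in the order their schema defines,
    # plus a reference to the schema they were encoded against.
    encoded_items = [
        ("Ernest Hemingway", 1952, "The Old Man and the Sea", "book_v1"),
        ("Ernest Hemingway", 1964, "A Moveable Feast", "book_v1"),
    ]

    def search(author, year_range):
        """Return titles whose schema-located values satisfy the query."""
        results = []
        for *values, schema_id in encoded_items:
            locations = schemas[schema_id]
            if values[locations["author"]] != author:
                continue
            year = values[locations["year published"]]
            if year_range[0] <= year <= year_range[1]:
                results.append(values[locations["title"]])
        return results

    print(search("Ernest Hemingway", (1948, 1952)))  # ['The Old Man and the Sea']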

Structured Data and JSON-LD

It was interesting to see Google come out with a patent about searching semi-structured data that focuses upon the use of JSON-LD. We see Google providing an example of JSON-LD on one of its Google Developers pages, Introduction to Structured Data.

As it tells us on that page:

This documentation describes which fields are required, recommended, or optional for structured data with special meaning to Google Search. Most Search structured data uses schema.org vocabulary, but you should rely on the documentation on developers.google.com as definitive for Google Search behavior, rather than the schema.org documentation. Attributes or objects not described here are not required by Google Search, even if marked as required by schema.org.

The page then points us to the Structured Data Testing Tool, to be used as you prepare pages for use with Structured Data. It also tells us that for checking on Structured Data after it has been set up, the Structured Data Report in Google Search Console can be helpful, and is what I usually look at when doing site audits.

The Schema.org website has had a lot of JSON-LD examples added to it, and it was interesting to see this patent focus upon it. As they tell us about it in the patent, it seems that they like it:

Semi-structured data is self-describing data that does not conform to a static, predefined format. For example, one semi-structured data format is JavaScript Object Notation (JSON). A JSON data item generally includes one or more JSON objects, i.e., one or more unordered sets of key/value pairs. Another example semi-structured data format is Extensible Markup Language (XML). An XML data item generally includes one or more XML elements that define values for one or more keys.

Machine Readable Extraction of Facts

I’ve used the analogy that XML sitemaps are machine-readable in a way that HTML sitemaps are not, and JSON-LD likewise presents facts on a site in a machine-readable way, as opposed to content in HTML format. The patent tells us that this is its purpose:

In general, this specification describes techniques for extracting facts from collections of documents.

The patent discusses schemas that might be on a site, and key/value pairs that could be searched, and details about such a search of semi-structured data on a site:

The aspect further includes receiving a query for semi-structured data items, wherein the query specifies requirements for values for one or more keys; identifying schemas from the plurality of schemas that identify locations for values corresponding to each of the one or more keys; for each identified schema, searching the encoded data items associated with the schema to identify encoded data items that satisfy the query; and providing data identifying values from the encoded data items that satisfy the query in response to the query. Searching the encoded data items associated with the schema includes: searching, for each encoded data item associated with the schema, the locations in the encoded data item identified by the schema as storing values for the specified keys to identify whether the encoded data item stores values for the specified keys that satisfy the requirements specified in the query.

The patent, which details the use of JSON-LD to provide a machine-readable set of facts on a site, can be found here:

Storing semi-structured data
Inventors: Martin Probst
Assignee: Google Inc.
US Patent: 9,754,048
Granted: September 5, 2017
Filed: October 6, 2014

Abstract

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for storing semi-structured data. One of the methods includes maintaining a plurality of schemas; receiving a first semi-structured data item; determining that the first semi-structured data item does not match any of the schemas in the plurality of schemas; and in response to determining that the first semi-structured data item does not match any of the schemas in the plurality of schemas: generating a new schema, encoding the first semi-structured data item in the first data format to generate the first new encoded data item in accordance with the new schema, storing the first new encoded data item in the data item repository, and associating the first new encoded data item with the new schema.
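
Read literally, the abstract describes a small bookkeeping loop. The sketch below is one loose, hypothetical reading of it in Python, where a “schema” is simply the sorted set of keys an item uses and “encoding” means storing the values in that fixed order.

    # Hypothetical reading of the abstract: schemas are identified by the sorted
    # key set of an item; encoding stores the item's values in that key order.
    schemas = {}        # schema_id -> tuple of keys
    repository = []     # list of (schema_id, encoded value tuple)

    def store(item):
        keys = tuple(sorted(item))
        schema_id = next((sid for sid, k in schemas.items() if k == keys), None)
        if schema_id is None:                 # no existing schema matches
            schema_id = "schema_%d" % len(schemas)
            schemas[schema_id] = keys         # generate and keep the new schema
        encoded = tuple(item[k] for k in keys)
        repository.append((schema_id, encoded))  # associate item with its schema

    store({"author": "Ernest Hemingway", "year published": 1952})
    store({"author": "Ernest Hemingway", "year published": 1964, "title": "A Moveable Feast"})
    print(len(schemas))  # 2, because the two items use different key sets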

Take Aways on Structured Data Use

By using structured data, such as Schema vocabulary in JSON-LD formatting, you provide precise facts in key/value pairs as an alternative to the HTML-based content on the pages of a site. Make sure that you follow the Structured Data General Guidelines from Google when you add it to a site. That page tells us that pages that don’t follow the guidelines may not rank as highly, or may become ineligible for rich results in Google SERPs. Another Google page about structured data is a guide titled Understand how structured data works, which contains links to helpful pages that let you learn more about how structured data is used at Google.

And if you are optimizing a site for Google, it also helps to optimize the same site for Bing, and it is good to see that Bing seems to like JSON-LD too. It has taken a while for Bing to get there (see Aaron Bradley’s post, An Open Letter to Bing Regarding JSON-LD). It appears that Bing has listened a little, adding some capacity to check on JSON-LD after it is deployed: Bing announces Bing AMP viewer & JSON-LD support in Bing Webmaster Tools. The Bing Markup Validator does not yet help with JSON-LD, but Bing Webmaster Tools now helps with debugging JSON-LD. I like using this Structured Data Linter myself.





SEO by the Sea ⚓


HTC Exodus 1 Review: Blockchain Dreams

February 26, 2019 No Comments

A smartphone that doubles as a crypto wallet, the Exodus makes for a solid device—as long as you level your expectations.
Feed: All Latest


Excel Tips for the Time-Crunched Marketer

February 26, 2019 No Comments

Let’s spend less time and gain more results. Who’s with us?

Read more at PPCHero.com
PPC Hero


©2018-2020 - American Online Phonebook - All Rights Reserved

Privacy Policy | Terms & Conditions