The Bogus Call to Arms Against Schema.org

Update (3 hours after initial post)

Manu Sporny, Chair of the W3C’s RDFa Working Group, read this article and provided a great response in the comment thread, to which I responded.  So be sure to read the comments after reading the article. 🙂


In an ideal world of “everything is free”, crowd-sourcing, and W3C standards, all major decisions that affect the masses should only be decided by consensus of the masses.  That’s a concept that, in regard to the evolution of the web, needs to be seen for what it is.  Fantasy.  It’s not reality by any stretch of the imagination.  Schema.org was long overdue, because there were too many competing choices and the search engines desperately need help in the process of identifying quality content.  And by collaborating on the Schema model, the big three unilaterally set the stage for a major step toward that cleansing, in a way that traditional “open” standards could never do.

Tonal Disclaimer

This is one of my harshest articles in a while.  Some of you may get really upset at Alan going so dark on you.  Well, let’s just say that this is an article I believe would not have as much of an impact if I sugar coated it.  It’s an opportunity for some of you to pause and open your eyes.  And if you do, I also believe you’re going to thank me for it afterward.

Giddy With Laughter

When Schema was first going viral on Twitter, I jumped over there to see what the big 3 came up with.  And within a few minutes, it was crystal clear to me that we’re seeing a paradigm shift in the search industry unfold before our eyes.  As imperfect as this first iteration is, it solves so many problems that I was giddy with laughter.

I immediately sent a directive to my agency client’s dev teams – Schema.org – read it, learn it, implement it.  No arguments. No delay.

I then tweeted the urgency and the opportunity before the search community that night, during #SEOChat.  And then wrote an article for Search Engine Journal entitled “Anticipating SEO in 2012 – Competitive Advantage”.

In that article, which went live while I was up at SMX Advanced,  I communicated how I saw a situation where people who get with the Schema program are going to have a competitive advantage.  Because, in my opinion (at the time it was JUST an opinion – today I KNOW it will be), Schema is going to be a ranking factor for organic search.

Searcher Intent, By Itself, Is Half A Pie

Up at SMX Advanced this past week, I was on the Google Survivor Tips Panel.  And as part of my prep for that panel, I’d read Eric Enge’s interview of Stefan Weitz, Director of Search at Microsoft.  In that article, Stefan described how they’re moving from reading words on pages as if they’re nouns, to a time in the not-too-distant future when they’re going to start reading and interpreting them as though they’re verbs.

What that’s about is the need to better understand the intent of the site owner in what they’re communicating their product or service offerings are about.  Which, if successful, will allow the engines to better match that data with searcher intent.

And from that interview, it was clear to me already that people are going to have to do a better job at being crystal clear on the intent of their site’s message. Which I already knew. Because most of the sites I audit turn out to do a TERRIBLE job of communicating their highly refined intent.

Because it’s usually not very refined at all.  And with a plethora of coding methods out there, and no agreement on standardized solutions, sites struggle to communicate this.

Along Comes Schema.org – The Other Half Of The Intent Pie

During that initial review I did of Schema, Stefan’s words came back to me in the blink of an eye – here, in this newly launched system, was a very powerful way for sites to better communicate their intent! All the way down to the granular level, if you execute properly.  And not just in headers and off-page stuff.  We’re talking the beef of any web site – the core content.

Confirmation – It’s Going To Be A Ranking Factor

On the last day of SMX, the morning keynote (just before my session) featured Stefan Weitz.  He talked about Schema.org, and how it’s going to help the search engines understand intent so much more.  And when it came time for taking questions from the audience, I asked if the connection I’d seen between his interview and Schema had been accurate.  He said yes – absolutely.

And when asked if Schema was a ranking factor, he said not initially.  But it will be.

After the keynote, I went up and spoke to Stefan briefly – and we shared a laugh about how obvious it should be that it’s going to be a factor and that some people just don’t get it.  Which was a bonus for me, because, as you can see from the image above, it was the very last slide in my presentation deck.  And having that slide perfectly matched by the morning keynote speaker just rocked my world 🙂

Angry Birds Flapping Their Wings Frantically – Fighting The Prevailing Winds

While some people got it right away, as I did – like Aaron Bradley over at SEO Skeptic – many in our community just found another reason to rant.  With the general tone this time being “wait and see” or “we already have microformats, RDFa, etc. – why do we need search engines dictating to us?” – and that’s the kicker.  It’s a typical reaction to change that’s so significant, so massive, that it’s unsettling.

And of course, since it’s something that the big 3 came out with, it’s another reason to hate on the big 3.  Evildoers, that they are. “What about the little guy, who’s going to be at an even bigger disadvantage?” “Oh why can’t we just do what we want?” “Why should I have to change?” “The only reason they’re doing it is to scrape our content”…

Birds of a Feather…

And as soon as this whole thing started really spreading, developers far and wide started screaming.  Not all – some saw the blessing that this really is. Yet many cried foul.  Insulted.  Put off.  Angry that they’re not going to be allowed to continue doing whatever the heck they please anymore.

Myopic Thinking Rears Its Ugly Head

In my presentation at SMX Advanced, I spent the majority of my time (a very generous 18 minutes, thank you very much Danny Sullivan!) showing comparisons of two types of SEO.

And I wasn’t referring to “the hat that shall remain nameless”.

I was talking about Myopic SEO and Sustainable SEO.

To me, Myopic SEO is stuck in the mud.  It’s limited in its vision.  It’s a major hindrance to sites gaining maximum recognition for their content. Which means it falls way short when seen through the eyes of the search algorithms.

You remember those.  The processes by which search engines determine whether your content is the most relevant for a specific search.

Well, anyhow – some people are all up in arms, railing against Schema.org, wanting to “take back our web“.  Yes, that’s right. Manu Sporny, the Chair of the W3C group that created RDFa, wants to FIGHT BACK.  Read his article.  Then come back.  But first, be sure to put down your coffee before reading his article, lest you spit coffee all over the place laughing.

Manu Sporny, I Feel Your Pain.

Look, I really understand why someone who spent years of his life championing one of several alternative markup solutions would be so upset.

RDFa was first proposed in 2004.  It took three years just to get to the first public working draft.

Another year to reach recommendation status.

That’s four years just to reach recommendation status!

As the Chair of the group, Manu probably has had more sleepless nights than many people experience in a lifetime.  What with all the bickering, and hemming and hawing, that’s inherent to the 20th Century methods required by W3C protocol, and of course, due to the fact that many people who participate in “open” standards have a hidden agenda, which gums up the process.

Then there’s archetypal reality in play – get ten code monkeys in a room, and you’ll get 14 “solutions” for a single problem. Throw a few Project Managers into the mix, a couple UX people, and some corporate spies forced to participate at the directive of their suit-and-tie employer, on the premise that “we have to have a say in this”…  Yeah – it becomes a long, slow, and completely bogged down process.

Except there’s a problem with RDFa.

It’s just one of a variety of methodologies to come along over the years from the web community at large.  And that is the bigger problem.  Competing solutions, none of which has reached, to this point, the status of “the only solution from a best practices perspective”.

“It’s Not My Problem” Syndrome

Over the years, as search has become exponentially more complex, from time to time I’d hear someone say “it’s not my job – that’s Google’s responsibility.”  As if Google has the magical power to figure everything out, without our help.  Which is complete bullshit.  For all the times we’ve heard “let Google figure it out” (even from Matt Cutts, over and over), we all know search quality sucks for many topics.  And it IS partially our responsibility.

Don’t believe me?

Why do you think Google invented the canonical tag?  Or why do you think they started encouraging the use of breadcrumbs as a signal?  Or microformats/microdata/RDFa in general?

Because they need our help.  #DUH
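As a quick illustration of just how little the engines were asking of us, here’s what a canonical hint looks like – a single line of markup (the URL is a made-up placeholder):

```html
<!-- Placed in the <head> of every duplicate or parameterized version of a page, -->
<!-- this tells the engines which URL is the "real" one (example.com is hypothetical): -->
<link rel="canonical" href="http://www.example.com/widgets/blue-widget/" />
```

One line of help from the site owner, and the engines no longer have to untangle duplicate-content signals on their own.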

Sustainable SEO

In my presentation up in Seattle, I mentioned that Sustainable SEO is vital. It’s forward thinking.

Sustainable SEO anticipates, evolves.

And those of us who take the time to think like business owners will have no problem understanding the importance of Schema.org, or that it’s already here to stay.  It’s not one of those “wait and see” situations at all.

Welcome To The Business World

So okay – they need our help.  Sue them.

While you’re suing them, I’ll be helping my clients push your site results even further down in the SERPs.  Because I understand that Schema.org is an answer to the search industry’s prayers.  And it’s an answer to site owners’ prayers.  Yeah – the people who pay all you developers and SEOs your salary, your hourly wage, your consulting fee.

That’s right – Schema.org is a brilliant Godsend for business owners.  People without whom you wouldn’t have a thriving and ubiquitous web, one that now permeates every cell of your being 24x7x365.

It’s a Godsend because what amounts to a cumulative millions of hours of wasted search effort on the part of searchers is that much closer to being repurposed.  It’s a Godsend because more site owners are going to be able to rise to the top of search results.  Which will mean they’ll make more money.  And that money will partly go to pay their employees.  Who have families to feed.

It’s a Godsend to millions of Project Managers around the world, who won’t have to deal with bickering code monkeys when it comes to “which method do we choose”.

It’s a Godsend to me, because I’m going to work on a way to help developers automate as much as possible of the process of filling data into Schema elements.  And that’s going to help me in the overall SEO consulting process, because programmers who get it will love me for that.

And for ALL of these reasons, within the business realm, the big 3 took action that was long overdue.  And would still be years out if not for their unilateral decision.

But Open Standards Are Just Around The Corner

Yeah – that’s right.  Just recently, the W3C finally moved one step closer to reaching another standard.  Which, however, would not have stopped countless engineers from doing things their way anyhow.  Either by NOT implementing any micro-method at all, or by going with something other than the W3C standard.

How do I know this?  Look at the web today.  It’s a mess.  HTML 4 – yeah that competed against XHTML.  And still does.  Even as HTML 5 is rolling down the tracks.  And code validation in any of them still sucks for the vast majority of sites on the web.

So if, after sixteen years in this business, I have YET to see a single “standard” that’s come from the web dev community be handled properly, let alone reach critical mass as the single, consistent method of doing things, how the heck do you think you’re going to convince me that you can get it right this time?

Be Patient, You Say

Ha!  Be patient.  While the world moves forward, driven by the needs of the business world, you go ahead.  Keep that pipe dream alive.  In the meantime, the closer the big 3 can get to improving search quality, the sooner I can get back to playing some Madden NFL.

Sometimes Children Need To Be Told What To Do

When a business owner needs to get every competitive advantage they can in order to succeed, it’s that need that drives business decisions.  And since many of us are already fully on board with the importance of Schema.org and how it solves so many problems (as imperfect as it is), you’re going to be very upset when your boss/client fires you after hearing that your stubborn belief system kept their site from ranking higher in search results.

But It’s Too Complex

Some are saying it’s too complex.  To understand.  Or to implement.

That’s okay – you can start with the basics.  For now, they’ll only be using Schema for display purposes – things like events, and recipes, and business addresses and such.  And you can get an intro primer over at Authority Labs – where Dawn Wentzell did a great job in her “Implementing Microdata” article.

And keep an eye open as more info comes out on rolling out microdata.  Because believe me, it’s coming.
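To give you a feel for how approachable the basics are, here’s a minimal sketch of a business address marked up with Schema.org microdata – the business details are made up purely for illustration:

```html
<!-- itemscope/itemtype declare the Schema.org type for this block of content; -->
<!-- each itemprop labels a piece of the already-visible content for the engines -->
<div itemscope itemtype="http://schema.org/LocalBusiness">
  <span itemprop="name">Spumoni's Pizza</span>
  <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
    <span itemprop="streetAddress">123 Main Street</span>,
    <span itemprop="addressLocality">Brooklyn</span>,
    <span itemprop="addressRegion">NY</span>
  </div>
  <span itemprop="telephone">(718) 555-0100</span>
</div>
```

Note that the markup wraps content that’s already on the page – nothing changes for the visitor; only the machine-readable communication of intent does.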

About Alan Bleiweiss

Just another guy. Who happens to have a lot of experience living, breathing and sleeping organic SEO. So that's my primary focus - high end SEO audits and consulting for sites ranging from thousands to tens of millions of pages. In my spare time I blog, rant, write eBooks, and speak at industry conferences.



  1. Dawn Wentzell says:

    Significant? Yes. Change? Well, not really. It *is* obvious this is going to become a ranking factor. Google, at least, has been heading in this direction for a long time, since they first started implementing their own standard for product feeds, and more recently with the Recipe search. Anyone who’s been paying attention should have seen this coming.

    • Hi Dawn – I think it is change. While they’ll continue to allow “competing” formats, it’s bad coding to mix and match. So wise developers will just throw out the others. And I refuse to allow client devs to use multiple formats. As has been communicated by Google, it could cause confusion to search bots.

      So in that regard, it’s change because they’re moving firmly in the “get your code act together” camp.

      I wouldn’t be surprised if, by 2013, they start giving validation some ranking weight.

  2. garethjax says:

    I was already a fan of well STRUCTURED information in microformats; now it’s officially “canon” from the big 2.5! (I can’t count Yahoo as a full search engine, sorry!)

  3. Doc Sheldon says:

    A very well presented case, Alan.

    When I first saw Manu’s rather emotional piece, I was dismayed to see the big 3 issuing what amounts to a death warrant to RDFa. I’m also not wild about the notion of the search engines being able to so easily override what is supposed to be our standards body. I’ve been a proponent of RDFa for some time, and I still believe it is better suited to the purpose than microdata.

    However, as you point out, W3C is impossibly bogged down in bureaucracy, conflicts of interest and discord. Left to their own devices, they would probably eventually come up with a workable solution. But I’m nearly sixty years old… I’d like to be alive to see it released.

    I would prefer to have seen this come from W3C. It would have validated their existence somewhat, and a subsequent adoption by the search engines would have brought the same result, albeit tardy.

    I would also prefer if RDFa had been chosen over microdata or microformats, as I see it as much more extensible and scalable. I have no doubt, however, that microdata will evolve to bring a similar level of utility.

    Bottom line, I agree that this is a good thing for the industry, as it will finally provide the necessary incentive for implementation.

    And by the way, to those that see it as too complex… it really isn’t. Especially when you weigh the potential benefits. And as pointed out, those benefits will befall someone… either you or your competition.

  4. Ken Jansen says:

    I am excited for Schema and helping the end users get better results. Seems like it will be a lot of writing for large pages, but if it bumps me up it will be worthwhile. Thanks.

  5. Manu Sporny says:

    Hi Alan,

    My name is Manu Sporny – current chair of the RDFa Working Group at W3C – however, I’m not responding in any official capacity here. This is my personal opinion. 🙂

    You’re mis-representing my blog post: “Manu Sporny, the Chair of the W3C group that created RDFa, wants to FIGHT BACK.”

    At no point in the article do I say “fight” or “war” or any sort of overly-combative term like that. If you knew me personally, you would know that I don’t view the world like that. I don’t think anybody should “win” or “lose” or “back the wrong horse” or any of the other phrases used to oversimplify the situation or marginalize opinions on what this means for the future of the Web. We should make these decisions consciously.

    The post was written to ensure that Web authors, developers and publishers know that they have a voice in this and if they feel concerned about this development, to let Google know. We’re Web geeks that care about the Open Web and we want technical solutions that will ensure that continues to be the case. We want choice. I want everyone to co-operate with the Schema.org folks and the search companies – but before we do that, the Schema.org folks have to know that people have very mixed feelings about how they went about creating this site and picking their winner.

    I’m not only a part of the RDFa community, but I’m also a long time member of and designer in the Microformats community. Like many of the people in these communities, I believe in the power of structured data on the Web. When we make decisions on this stuff – we should make sure that short-term gains aren’t overshadowed by long-term losses.

    I’m a small business owner as well – Digital Bazaar. I understand and sympathize with your points from a business owner and SEO perspective. We have mouths to feed as well. I know exactly what it feels like when your SERP position drops – it can crush your company. We’ve experienced that with one of our previous start-ups – the sick, sinking feeling you get when you drop off of the front page of a Google search because you understand that a drop in SERP position will eventually lead to layoffs.

    We want to help web developers’ SERP positions – we are very much for that! We would also like Google and Microsoft to work with the rest of the community to make that happen.

    Taking this from another angle – if the solution that Google and Microsoft proposed would work for everyone, there would’ve been far fewer complaints than there have been. There are a great number of people that are upset about this – not because their solution wasn’t picked, but because of the ramification of the decision on the Open Web. Let’s see if we can all strike the right balance.

    The announcement isn’t all bad – this is a huge win for SEO and structured data on the Web. The search companies are finally saying – “Yes, we’d like your help.”. That’s huge.

    We just need to tweak a few things on Schema.org to make sure we’re not accidentally preventing a more fruitful future from coming to pass. In order to do that, we all need to work together to create a rising tide that lifts all boats.

    • Manu,

      Thank you for taking the time to provide such a well communicated comment on my post. While I do agree that there are serious concerns regarding the first iteration of the structure, I’m convinced that had Kavi Goel reached out in advance, it would have led to even more drawn out delays that are a natural part of the open web process. Yet that’s the problem. How much more delay would have been viable?

      The search engines have been hammered for far too long regarding the quality issue. And pouring more and more money into accommodating the competing solutions became outweighed by the need to take action. I thought Kavi was pretty clear when participating in the SemTech discussion, where he stated:

      Kavi Goel: We were expecting consolidation. Then microdata came up as well, and we did a 3rd version. There was no consensus being reached over time, both in syntax and vocab. It was getting worse, not better. Multiple standards, without support for one, and this looked bad.

      I am, however, also slightly encouraged by their communicating that they’re open to consensus input over time. Stefan Weitz stated during his SMX Advanced keynote that if enough consensus can be made in terms of wide scale adoption of other parties extending on their own, they’ll integrate those new types and elements.

      So it will be interesting to see if the open web community actually can come to enough consensus in that regard.

      • Manu Sporny says:

        I’m convinced that had Kavi Goel reached out in advance, it would have led to even more drawn out delays that are a natural part of the open web process. Yet that’s the problem. How much more delay would have been viable?

        You forget that they did reach out before – for hRecipe and other Rich Snippets and it worked out fantastically well. There were barely any delays and many people use hRecipe now for recipe markup on the Web. They re-used and extended the Microformats hRecipe format and provided authors with the choice of using Microdata, RDFa and Microformats. Almost everyone was happy with that approach because they continued to be able to choose the syntax, and vocabulary, that worked for them.

        Now let’s generalize your assertion a little further – if they had tried to work with the various communities on Schema.org, yes, they would’ve met push-back and it would’ve taken longer. That is because the solution proposed is dangerous to the Open Web. I have no idea how much longer it would’ve taken, but I think that’s where we may differ in opinion.

        You accept that it would’ve been nice if this was done through the “open web process”. I agree. There are many processes that could be considered “Open Web”. The World Wide Web Consortium (W3C) is just one of them, the Internet Engineering Task Force (IETF) is another – but it doesn’t stop there. The Microformats community would have been a reasonable place to do it. Public mailing lists would’ve been another. Just be open and work with the community – it will take longer, but we’ll all be better off for it.

        Would you trust just Microsoft to run Schema.org? Just Google? What about Oracle? What about Digital Bazaar (my company)? So, why is it that when these companies get together, all of a sudden it’s okay? I say that with no ill will toward any of those companies, but it’s concerning when individual corporations could take control of a part of the Web (either on purpose, or by accident). This is why folks are concerned about Net Neutrality – we don’t feel that we’re going to get fair representation when it really matters.

        Standards are very difficult to get right. Imagine herding cats into a shower stall and then cranking the water up to full blast – cold. There are many moments when the process isn’t enjoyable and it feels like it is taking forever, but we do it because it is good for the Open Web. It takes so long because standards work is highly technical, very precise and is required to take input from the public. We are very serious about listening to those that have an educated opinion and then attempting to make sure their concerns are addressed.

        That’s why I’m spending part of my Saturday having this conversation with you. I (and the standards process) value your opinion and think that we should have this conversation in the open. We should be transparent. That takes a great deal of time, but again – it’s good for the Open Web.

        /If/ Google and Microsoft create a standardization body around Schema.org that ensured that the vocabularies would reflect the needs of people creating structured data out there – that would be a good move in my book. I don’t share your opinion that they’re going to do that without a good number of us stating that we really, really want that to be the way this stuff operates.

        Don’t forget – /we/ own the Web along with the corporations. Our voices should be represented because we have just as much of a stake in this stuff as they do. We don’t need to be mindlessly idealistic about everything, but we should speak up when things tip out of balance.

        • Manu,

          I had not forgotten about the previous outreach. I did, however, consider the scope previously involved, as compared to the scope involved with an entire “solution”.

          Here’s where it gets very sticky. In addition to the honest belief that I have (as I can only assume the engines’ teams have) regarding the logistical magnitude of reaching true consensus, they ARE for-profit corporations, and that is the biggest problem with having the full dialogue you seek.

          Many of the reasons they’re going to be using them cannot be given transparency without exposing algorithmic insight they’re not going to reveal.

          So exactly how much longer it would take would depend on how easily they could engage in dialogue that required their skirting reasons for wanting to do things a certain way.

          People would be up in arms at every turn and every corner, and at every accusation or implication of their skirting the minutia.

          I actually think it was a brilliant move on their part to do it this way. Kicking and screaming is not necessarily a mature tactic. It is, however, a way to propel the divergence forward into uniform consistency with much more energy than I personally believe existed.

          And I will be happy to admit that one thing I’m keeping my eye on is what patents the engines come out of all this with, in regard to the usage of the data.

          “Should” they be the arbiters for all the web? No. Not in the ideal world, as the opening line of my article states.

          Could this cause vast nightmares for many people, institutions and entities who use semantic markup for reasons OTHER than search? Oh, believe me – I absolutely believe that.

          For good or bad, it’s here now. And enough people were tired of waiting and are already climbing aboard the train that I’m confident they’re not going to undo this one.

          That’s why I’m embracing it with open arms, just as I have every other major paradigm shift that’s come from the Plex to date. And yes, I will rant ceaselessly at some of the more archaic challenges that entails.

          And I wouldn’t have a problem simultaneously supporting the open web community getting a seat at the table, to whatever degree they’re going to be willing to provide.

          Yet I am not of the mind that it’s going to be a full seat. Because I fully understand the deeper corporate reasoning behind the whole thing and pragmatism allows me to move on from that one, because for all my seeing value in open web, I’m not a purist there. SEO is my business first and foremost, and serving the needs of my clients is paramount.

  6. john allsopp says:

    The challenge here, as with microformats, rdfa, microdata etc is adoption by developers. This requires much more than just motivation (this will help your SEO).

    The lesson from the adoption of CSS, and better markup practice which took a decade or more to really take hold is it takes a strong community, and the attendant sites, resources, tutorials, forums and so on to bring about the sort of scale semantic markup will need in order for it to bring genuine benefits.

    Will search engines privilege .001% of web content because it is marked up using schemas? That’s the hope of early adopters, but it’s hard to see how that’s going to improve results. If anything, it will distort them.

    Prediction – remember knol? Right. Adoption, close to zero, despite huge enthusiasm, and mainstream press coverage on pre-launch.

    This stuff is hard. The technology, getting adoption, and getting the sort of scale required. Alienating the community that has been working for many years to make this work is far from the smartest first move.

    • John,

      Thanks for the insights. They definitely add value to this discussion.

      Knol? I don’t believe that’s a fair comparison. Sitemap.xml – yeah that’s more like it. As are breadcrumbs, permalinks, and even fundamental SEO. Developers who work on business sites are either going to get on board or be replaced. May not be this year, or next, or two years from now. It will happen enough though, over time. That’s my take on things. Could be wildly inaccurate in my prognostication. Would be the first time in my SEO life. And may need to happen just to humble me in that regard.

    • Norcross says:

      From a developer’s point of view, I don’t see this being as much of an issue to implement (other than the time itself) as issues of markup, i.e. HTML4 v HTML5, CSS, etc. While other standards such as HTML5 and CSS3 (which haven’t reached complete agreement yet) are still very much browser dependent, these guidelines can be implemented without any concern for the browser the page will be viewed in. The main difference here is that it’s all behind the scenes. With most of the web content today being generated by a CMS (WordPress being my method of choice), the markup can be implemented at the theme/template level and reach across the site: both existing and new content will be affected.

      • Thanks for commenting, Andrew – I agree on all points. I’m hearing the same view from many developers. It is a straightforward implementation – at least the wrapping of content is. The hard part there is what to enter into the description fields. Personally, I’m already envisioning a way to automate much of that, though it’s going to be tricky in some Schema types, which will mean there are going to need to be CMS-based form fields that don’t currently exist.

        And THAT is the biggest concern I see long-term. Many people already do a crap job of using the CMS’s they’re given, unless they’ve got a professional SEO to hold their hands.

        Then again, that also means long-term job security. Again. 🙂

  7. webecho says:

    Firstly, good article and it’s great to see the conversation between you and Manu.
    I was very excited about this when I read the first couple of articles and visited Schema.org for the details; that’s when my excitement faded a little.
    I think it’s sensible that they are trying to make a ‘standard’ format which will allow us to help them, ourselves and our clients by making the context of the content easier to understand.
    Where I think it’s lacking at the moment is in its ‘options’.
    I immediately went to look at the schema for Accommodation, Restaurants & Wineries, as they are most relevant to the majority of my clients. It was disappointing to see that Restaurants & Wineries are provided with identical details – I mean all the options available are the same, except declaring one a winery and another a restaurant.
    Google et al will have already worked that out from the business name, page titles, urls and content. Why have a distinction if the options are identical?
    The Accommodation or ‘LodgingBusiness’ (has anyone ever searched for a ‘lodging business in x’?) has a limited list of options; none of the accommodation clients I work for fit any of those categories – they own resorts or holiday homes, not hotels, motels, or B&Bs.

    My main point is that without the option for developer input we’re stuck with dictated categories which, rather than improving relevance, will serve up less correct results as we’re forced to shoehorn clients into ‘best match’ categories.
    It’s left me torn between rejoicing that someone has ‘taken charge’ and made a decision and got it running and being concerned that it’s not going to be flexible enough for useful application by many.

    • Thanks Webecho. You’re not the only one concerned with the lack of flexibility in the first iteration. I’ll be keeping a keen eye on how they’re going to address it beyond “leave a comment”. My educated guess is that for now, you’ll still have the ability to be discerned for niche-market uniqueness – that it’s not going to shoehorn anyone into a corner. Especially since, initially, it’s just going to be used for displaying more specific info within the SERPs for various aspects of a company’s offerings.

      And to me, that’s a good thing because people will still be just as likely to click through for the info they’d expect from a particular niche.

    • Doc Sheldon says:

      And there, Webecho, I think you’ve struck at the heart of it… with no input, we’ll be stuck with ‘best match’!

      • But guys – Schema isn’t replacing or eliminating anything. It’s there to enhance existing content – to extend it in a consistent manner. So input will only make it better, though it isn’t needed just to improve what’s already there as far as clarifying site-owner intent.

      • webecho says:

        I do applaud the fact that they have made a decision and implemented it, rather than allowing an endless ‘what if…’ cycle.
        I’m just hoping (begging, pleading) that they will use the ‘right, this is how it is, now help us improve it’ method.

  8. @cbullokles says:

    One of the most-used microformats is position, coord or geolocation, yet it seems so difficult to agree on a common name for its attributes.
    Using it as an example: in Schema you defined elevation to describe altitude. Why do you need to create a different name for something already defined at the W3C and used in many JS implementations?
    Why not reuse it, to avoid inconsistencies in definitions? And should we expect the same approach for others, like systeminfo?

    • Cristian – this is one of the primary examples of why I see Schema as a less-than-stellar execution in its first iteration. And for the foreseeable future, yes, you can probably expect a similar approach. The team is going to have to really get their act together for this to be a truly well-embraced “solution”, otherwise a lot of sites are going to suffer due to rebellion.

      It won’t stop the train though – that’s already left the station.
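      For anyone who hasn’t compared the two vocabularies, the naming clash Cristian describes looks like this in practice: Schema.org’s GeoCoordinates type uses `elevation`, where the W3C Geolocation API exposes the same quantity as `coords.altitude`. A sketch of the Schema side (the place name and values are invented for illustration):

      ```html
      <!-- Schema.org calls it "elevation"; the W3C Geolocation API calls the
           same quantity coords.altitude -->
      <div itemscope itemtype="http://schema.org/Place">
        <span itemprop="name">Example Lookout</span>
        <div itemprop="geo" itemscope itemtype="http://schema.org/GeoCoordinates">
          <meta itemprop="latitude" content="40.75" />
          <meta itemprop="longitude" content="-73.98" />
          <meta itemprop="elevation" content="320" />
        </div>
      </div>
      ```

      Same data, two names – which is exactly the kind of inconsistency the vocabulary was supposed to eliminate.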

  9. Terry Van Horne says:

    Hmmm, same story as before… there is absolutely no advantage to Schema over the others, other than the SEs back it. They will still use the others; it’s not an either/or but a choice based on what works best for the developer. You go on and on about advantages, but I’ve yet to see one real advantage to Schema. All I see is something that won’t help ecommerce, which is by far where the most benefit from structured data is to be had. Page segmentation is covered in HTML5 – again, why reinvent the wheel and add more code to the mix? I mean, if you use theirs, you still need the stuff from HTML5, because browsers couldn’t care less what SEs think; they take their direction from a governing body.

    All the SEs started out supporting NoFollow; guess what, they now disagree on using it for paid links – that’s a Google-specific rule with no backing from any other SE. So what happens when they don’t all agree here? Where’s the value-add then?

    • “They will still use the others it’s not an either or…”

      That’s the way it is for now.

      It’s not a choice based on what works best for developers. The whole point is that developers never agree on methodology, vocabulary – any of it. And that’s why they came out with Schema: to address that very issue while simultaneously going the extra step of starting to bring clarity to content intent.

      Could it easily devolve down the road? Sure. Will it? Assuming so in advance is, in my opinion, a mistake. Assuming it can be ignored is as well.
