eMarketing101.net: Traffic Means Business


eMarketing news

From: google.com

Posted by adamf

I’m pleased to announce SEOmoz’s latest tool, FutureRank, is now available in beta and is free to anyone for the next 48 hours (afterward it’ll be PRO only). FutureRank attempts to predict what might seem impossible: how your pages will rank next week, next month and next year. Before you dive in and try this exciting tool yourself, it’s important you understand how it works, and its limitations.

You start by entering a keyword, the URL of a page that currently ranks for that keyword, and the approximate current rank. (Why approximate? Because most of the prediction is based on the data in our existing data store; what you enter is used as a starting point for our prediction models.)



After clicking the button, it will take about 30 seconds to run the prediction model (we’ll try and keep you entertained while you wait). After the calculations are completed you’ll be presented with the predicted ranks for next week, next month and next year, and the corresponding level of confidence.
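The input flow above (keyword, ranking URL, approximate rank in; predictions out) can be sketched as a small request object. The class name, field names, and validation rules below are hypothetical illustrations, since the post doesn’t document the tool’s real interface:

```python
from dataclasses import dataclass

@dataclass
class RankQuery:
    """What the tool asks for: a keyword, the page that ranks, and a rough rank."""
    keyword: str
    url: str
    approx_rank: int  # only a starting point; the models rely on stored data

    def validate(self) -> None:
        # Reject obviously malformed input before running a 30-second prediction.
        if not self.keyword.strip():
            raise ValueError("keyword is required")
        if not self.url.startswith(("http://", "https://")):
            raise ValueError("url must be absolute")
        if not 1 <= self.approx_rank <= 100:
            raise ValueError("approximate rank should be within the top 100")

query = RankQuery("yellow shoes", "https://www.zappos.com/yellow-shoes", 7)
query.validate()  # passes; malformed input would raise ValueError
```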


I know you’re wondering: how could we be so confident with predictions so far in the future? Well, it depends on one big assumption that it’s important to understand before using the tool.

The accuracy depends on one big assumption

The FutureRank tool runs with one big assumption: that the SEO activities you’ve been performing for a given keyword and page will remain stable over the next 12 months. What does this mean? To use the Zappos screenshots above as an example: let’s pretend Zappos has been spending about 10 hours each month to try to rank for the keyword “yellow shoes” (not likely, but let’s pretend). The tool assumes that Zappos will continue that same level of effort of 10 hours a month for the next week, month and year. For the tool to be accurate, your level of effort towards optimization for a given keyword / page must remain constant. It doesn’t matter if that’s no effort put toward optimization or 40 hours a week; it just needs to remain constant.

The accuracy of the model also depends on the keyword, because we have a varying amount of data for each keyword that’s entered. Despite this, we were shocked that keywords our system has never seen yielded surprisingly high accuracy—our brilliant engineers tell us they were able to approximate changes to a given SERP by analyzing root words in a given keyword phrase. The tool is less accurate, however, with one-word keywords that our system has never seen, and the confidence scores in the tool will reflect this.

We’ve been testing the tool internally for the past few months and have been quite surprised by the accuracy. In the cases where the tool predicted a rank incorrectly, it was often for keywords that we had been ignoring and began optimizing soon after making the first prediction. After a month of optimizing, we found that the tool adjusted for this change and became more accurate when we ran the prediction again.

Free access for the next 48 hours, limited to PRO afterwards

We’d like everybody to have an opportunity to try the tool and provide feedback. After 48 hours, the tool will only be accessible to SEOmoz PRO members. Please give it a try, and let us know what you think using the Feedback link on the left side of the screen.

Try Out Future Rank

How does FutureRank work?

It may seem like we’re using a time machine to make these predictions, and our design team had a little fun with that idea. While I’d love to say that SEOmoz has harnessed the power of space and time, it’s actually not as complicated as you might think: it just requires a lot of data. No flux capacitor required.

Between crawling the web to create our Linkscape index and monitoring aggregate performance data for tens of thousands of websites, SEOmoz collects a large set of data on the web’s link graph, ranks, traffic and the composition of a wide range of SERPs. During a brainstorm session on how we could use this valuable data to create new tools for our PRO members, Cyrus from the customer team, jokingly suggested we create a tool to predict the future rank of a given web page.

While most of us chuckled at the idea, a few of our engineers began looking at our data and creating some simple prediction models. Within a few weeks they had developed an internal alpha tool that was moderately accurate, and after a few months of tuning the prediction model against real results we thought it was time to release the beta to the Moz community.

In short, our prediction model is based on analyzing the prior ranks of both your page and the other pages in the SERP, the Moz metrics over time (Domain Authority, Page Authority, mozRank, mozTrust), and machine learning models of the search engines’ ranking factors. In the next few weeks we’ll publish a more detailed post explaining the prediction model.
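The post doesn’t disclose the actual model, but the core idea of extrapolating from prior rank observations can be sketched as a toy least-squares trend fit. The weekly sampling, the clamping rule, and the example history are all illustrative assumptions, not SEOmoz’s method:

```python
def fit_trend(ranks):
    """Least-squares line through (week index, rank) observations."""
    n = len(ranks)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ranks) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ranks))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict_rank(ranks, weeks_ahead):
    """Extrapolate the trend, clamped so a rank never goes above position 1."""
    slope, intercept = fit_trend(ranks)
    predicted = intercept + slope * (len(ranks) - 1 + weeks_ahead)
    return max(1, round(predicted))

history = [9, 8, 8, 7, 6, 6, 5]  # weekly ranks, oldest first: steady improvement
predict_rank(history, 4)  # extrapolates the improving trend forward
```

A constant-effort assumption falls naturally out of a model like this: a straight-line extrapolation is only plausible if whatever produced the past trend keeps happening.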

Beta limitations

  • The tool currently requires you to enter the approximate current rank of the page (in a future revision we’ll do this automatically).
  • Only works for Google US search at the moment (we hope to add other locales soon)
  • As mentioned previously, the predictions are only accurate if you continue SEO activities at the same level of effort (regardless of if that’s low, medium, or high).
  • The accuracy varies by keyword: the more common the keyword, the more accurate the prediction, but the tool works surprisingly well with long tail keywords as well.

Does rank matter anymore?

Many have been discussing the merits of monitoring rank given how much it varies by search, by the searcher’s geolocation, and by the influence of social signals. Given this variance among users, the more valuable performance indicators might be traffic, or an average rank across a wide range of searchers and locations. However, until a robust method exists to measure average rank across all your keywords, we believe rank is still a worthwhile performance indicator (so long as you’re also measuring the traffic you receive from those keywords).
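One practical way to get at the “average rank across all your keywords” idea is to weight each keyword’s rank by the traffic it drives. This is a generic sketch, not a Moz feature; the data points are made up:

```python
def weighted_average_rank(observations):
    """Average rank across keywords, weighted by the visits each one drives.

    observations: list of (rank, visits) pairs, one per keyword.
    """
    total_visits = sum(visits for _, visits in observations)
    if total_visits == 0:
        raise ValueError("no traffic data to weight by")
    return sum(rank * visits for rank, visits in observations) / total_visits

# One high-traffic #1 keyword dominates two weaker ones.
keywords = [(1, 900), (4, 300), (12, 50)]
weighted_average_rank(keywords)  # 2.16, far closer to 1 than the plain mean of 5.67
```

Weighting by traffic keeps a long tail of poorly ranking, rarely searched keywords from drowning out the rankings that actually matter to the business.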

We’d love your feedback

Please try the tool out and let us know what you think in the comments below!

Try Out Future Rank


Read Original: http://feedproxy.google.com/~r/seomoz/~3/OGep5ZdhZo4/introducing-futurerank-beta

/// Posted by Alexandre Brabant on Thursday, March 31, 2011

Google Japan Forgoes April Fools For 2011

From: searchenginewatch.com

It’s not a joke. Google Japan is postponing April Fool’s Day until next year.

While recognizing that April 1 is a day many fans of Google look forward to, Google Japan has decided not to go forward with the typical barrage of hoaxes and pranks this year.

Out of respect for the victims of the March 11 earthquake and tsunami, the Google Japan home page will display Google Doodles from the 2009 Doodle 4 Google finalists. There were 30 finalists in 2009, and each is represented today.

Doodle 4 Google allows school-aged children to draw themed Google logos for the chance to be awarded the honor of having their winning Doodle on the Google home page for a day. The theme for Doodle 4 Google Japan 2009 was “My Japan.”

The children created original artwork of what the land of the rising sun meant to them. While the winners were not supposed to be posted more than once, Google Japan felt it was important to come together for their country.

Read Original: http://feeds.searchenginewatch.com/~r/sewblog/~3/YKoR8lUsaFc/110401-023937

/// Posted by Alexandre Brabant on Thursday, March 31, 2011

Introducing white space links

From: google.com

The challenge of monetizing the web is a tricky one, but a new venture launched right here and right now is out to solve that problem.


It’s called whItespAcelInks.


There’s all this unused white space on the web. Spaces in between paragraphs or links. Wasted.


Consumers are tired of being overwhelmed by ads and by pages that are stuffed to the gills with ads. What if the ads were invisible? What if we could insert links into the white spaces, links you didn’t have to see but could still be clicked on? What if those ads were carefully targeted, location-based and mobile?

IT WORKS FOR LINKS, TOO: http://www.squidoo.com/seth

This is even better than permission marketing. It’s invisible marketing.


In one fell swoop (does anything ever happen in two fell swoops?) we can double or triple the ad inventory of any website! And there’s no need for complicated creative, because, after all, the links are invisible.

Some highlights from the funding plan:

  • We will track every user, protecting privacy by never talking about the fact that we’re doing it.
  • We will create persistent browser tools that permit us to generate whItespAcelInks revenue even when you’re not online.
  • There will be no push back from regulators because the links are invisible.
  • Will there be Android? Yes. There will.
  • An iPad app? I can’t believe you even need to ask. In fact, the iPad app will be so appy that people will pay for it by subscription.


First round funding, announced today, is $11 million. We wanted to keep it modest and prove ourselves in the marketplace. The biggest challenge for us going forward is that the service only runs one day a year.

Read Original: http://feedproxy.google.com/~r/typepad/sethsmainblog/~3/wVyXKknkra4/introducing-white-space-links.html

From: searchenginewatch.com

Although it was targeted toward sites lacking good content, many quality sites were also affected by Google’s Panda update last February. Now, over a month later, it appears Google has a reinclusion process in place.

Using a process similar to Google’s site verification process, site owners will be able to have their sites automatically re-included in the Google search results (SERPs) by downloading a text file from their Webmaster Tools account. Google’s crawlers will then return to those sites hit by Panda, consume the Bamboo, and reverse the effects the hungry Panda update caused.

Here is an image of Bamboo in the wild:



Read Original: http://feeds.searchenginewatch.com/~r/sewblog/~3/fcVy03zlZTQ/110401-000100

From: google.com

Posted by Aaron Wheeler

 We all know that, at first, it can be really difficult to decide what the most valuable link metrics are and when to use them. Last week, Rand outlined and defined a variety of metrics that are used to assess the respective values of domains, pages, and links between them.  This week, he’s back with the stunning conclusion: how to actually use these link metrics in your research and how to choose which metrics to use for given research situations. If you were ever confused about when you should be using PageRank and when you should be using mozRank, fret no longer!

<iframe class="embeddedvideo" width="600" height="337" src="http://seomoz-cdn.wistia.com/flash/embed_player_v1.1.swf" name="wistia_326621" type="application/x-shockwave-flash"></iframe>


Video Transcription

Howdy, SEOmoz fans! Welcome to another edition of Whiteboard Friday. Today the exciting conclusion, Part 2 of 2, on which link metrics to use. So, last week we discussed in depth a ton of the link metrics that are available, what they mean, what they do, how you can interpret them. Today I want to walk through some of the specific tasks and projects that you are going to be involved in when you are doing SEO kinds of things and which metrics can help you to accomplish those specific tasks.

First up, let’s say I am doing a high level SERPs analysis, something like the keyword difficulty tool output report where it is trying to give me a sense of who is in the top 10, who is in the top 20. Why are they ranking there? Is it because they are an exact match domain? Do they have a lot of good anchor text? Do they have a ton of links into them? Is it because their domain is important or their page is important? We can look at a few key metrics. I like looking at page authority, which is that aggregate of all the mozMetrics and domain authority and then maybe the number of linking roots and C-blocks just to give me a rough idea of kind of what I am dealing with. That high level SERPs analysis is great when I am doing like a keyword difficulty report trying to determine which keywords to go after, whether it is roughly possible for me to be ranking in those sectors.

If I want to do some link quality analysis, so I am looking particularly at a link and trying to determine is this link helping me to rank? Is it potentially hurting me? If I am looking maybe at a client’s website, say I was doing consulting or I am a new SEO in an in-house position and I am trying to analyze whether some links that were built previously are questionable or not, there are some really good ways to do that. One of my favorites is looking at PageRank versus mozRank and mozTrust.

Normally, what you should see is that PageRank and mozRank are pretty close. If PageRank is a 3 and mozRank is like a 4.5, it might be okay. It’s a little on the border. If it is a 3 and a 3.5, oh, that’s, you know, that’s perfectly fine. That’s normal. We should expect that. If, however, I am looking at like a 3 and a mozRank is like a 5.8, something is fishy, right? Clearly, I mean, Google probably knows about more links than SEOmoz does and mozRank, boy, for it to be that high and PageRank to be that low, something might be up. Something might be going on where this site is selling links, Google has caught them, they are doing something manipulative. This could be a problem. Then I also like comparing mozTrust, because a lot of times, you won’t see PR scores, especially for a lot of new sites and pages. Google hasn’t gotten the data there, or they have an outdated PR, but that site has built a lot of links in the meantime. By the way, you do want to be careful of that too when you are comparing PR and MR. But mozRank and mozTrust, if I see like a 5.8 and a 7.2, this is probably a phenomenal link. If I see a 5.8 and a 2.2, that’s really, that’s a bad sign. That usually means that this site or this page has gotten a lot of links, but from a lot of very questionable sources. Otherwise, their mozTrust should be quite a bit higher.
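The rules of thumb above (mozRank far above PageRank, or mozTrust far below mozRank) can be sketched as a simple flagging function. The 1.5-point gap threshold is my own illustrative choice, not a Moz recommendation:

```python
def link_quality_flags(pagerank, mozrank, moztrust, gap=1.5):
    """Flag the two suspicious metric patterns described above.

    pagerank may be None for new pages Google hasn't scored yet.
    """
    flags = []
    if pagerank is not None and mozrank - pagerank > gap:
        flags.append("mozRank well above PageRank: possible penalty or manipulation")
    if mozrank - moztrust > gap:
        flags.append("mozTrust well below mozRank: links from questionable sources")
    return flags

link_quality_flags(3.0, 5.8, 2.2)  # trips both checks
link_quality_flags(3.0, 3.5, 3.2)  # returns []: normal, healthy gaps
```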

So, those types of analyses along with looking at not just the number of links but the number of external versus internal links, if it’s a lot of internal links, maybe that is boosting up the ranking, but it will be easier to overcome than a high number of external links and followed/no-followed. If it is a lot of no-followed links coming to the site, oh that is a different story than if all the links are followed.

Now, if I am looking at outreach and popularity, I am trying to say, how popular is this blog? How important is this news website? How big and popular on the Web do I think this forum is or Q&A site or community? Then, I want to be looking at some of those high level metrics, but I might want to dive sort of one step deeper and look at, yes, domain authority. I really care about domain metrics here, right? Not individual pages on those sites. So, I am looking at Domain mozRank and Domain mozTrust, which are the same thing as mozRank and mozTrust but on the domain wide level, and then I might care a lot about the linking roots and C-blocks, because that tells me a raw popularity count. How many people on the Web have referenced this guy compared to other people?

Now, if I am looking and trying to sort by the most helpful links to raise my ranking, say I am analyzing a set of 50 blogs and I want to decide, who am I going to guest blog for first? Who do I really think is going to be providing that value? Or I have the opportunity to sponsor or speak at a conference or contribute in some way, and I know that I can contribute the content or whatever I need to, to get those links. I really care a little bit less about the metrics and a bit more about these big three questions. So, I would ask you before you look at the metrics to ask yourselves these three questions, particularly if you are doing that sort of detail level analysis.

Number one, how well does that page or that site rank? If you search for a few keywords that are in the title tag of this particular page or the homepage of the site and it does not come up in the number one or number two positions, that might not be a good sign. If you search for four or five keywords that compose a phrase in the title and it is still not coming up, something is seriously wrong there. There might be some issue with that site in Google.

How relevant and useful is it? Is this site going to send actual traffic? Was the link editorially given? Is it a true citation that represents an endorsement from one site, one page to another? If that is not the case, you might be in trouble in the future. Even if Google hasn’t caught it yet, Bing hasn’t caught it yet, in the future, that might be causing problems. It is just not worth it. Go spend your time on other links that are editorial, sincere citations.

Do the sites and pages it links to rank well? This is a great way to analyze directories or link lists or those kinds of things and say, oh, this looks highly relevant. It is a pretty good site. If the pages that it is linking to don’t rank well for their keywords, that’s a bad sign. If a few of them don’t, okay maybe, you know, everybody links to a few bad apples. But if a lot of them are not ranking well, something is going on there, right?

Next, I might look at some metrics like mozRank versus PageRank as we did above, mozRank versus mozTrust, the number of links and linking root domains just to get a sense of these. But those three questions, more so than any metric, are going to really answer the question of how helpful will this particular page or site be in raising my rankings if I get a link from them. Next, second to last here, is sorting of links. So if I want to do a rough or a raw sort, I have a bunch of links that I exported from Google, that I exported from a tool that ran that analyzed a bunch of pages and figured out whether there was usefulness. Maybe I used the – in SEOmoz Labs there is that great tool to help me find all the search queries that I could use to find potential links. I think it is the, what is that called? I think it is the Link Acquisition Assistant. So, the Link Acquisition Assistant might export a bunch of raw lists of pages, and if I want to do some just raw sorting to get a general sense of importance before I start asking these questions, PA/DA are really good for that and so is number of linking roots. So inside the web app, you will see a lot of these. We tend to show at least those three metrics on most everything so you can do a rough sort.
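The rough sort described above (PA, then DA, then linking root domains) is easy to reproduce on any exported prospect list. The dictionary keys below are hypothetical, since the export’s actual column names aren’t given in the post:

```python
def rough_sort(prospects):
    """Triage link prospects by Page Authority, then Domain Authority,
    then linking root domains, highest first."""
    return sorted(
        prospects,
        key=lambda p: (p["pa"], p["da"], p["roots"]),
        reverse=True,
    )

# Hypothetical rows from an exported prospect list.
links = [
    {"url": "a.example", "pa": 42, "da": 60, "roots": 120},
    {"url": "b.example", "pa": 55, "da": 48, "roots": 300},
    {"url": "c.example", "pa": 42, "da": 71, "roots": 90},
]
[p["url"] for p in rough_sort(links)]  # b.example, then c.example, then a.example
```

Sorting on a tuple means ties on PA fall through to DA, and ties on both fall through to linking roots, which matches the idea of a coarse first pass before the qualitative questions.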

Finally, last but not least, if I am doing a deep SERPs analysis, where I really want to know why does this particular page, why does this particular site rank where it does? Why is this 3 and this 2 and this 4? I want every metric I can get my hands on. The reason is because when you analyze these things all together in Excel, you can see weak points, strong points. You can get a sense of what Google is using or Bing is using in that particular grouping or algorithmic result to try to determine who should rank higher and lower, and that will give you a great sense of what you might need to do to accomplish those rankings.

All right everyone, I hope that this two part Whiteboard Friday extravaganza has been great for you. I look forward to the comments on the blog. Take care.

Video transcription by SpeechPad.com


Read Original: http://feedproxy.google.com/~r/seomoz/~3/ND0Di6czr0A/which-link-metrics-should-i-use-part-2-of-2-whiteboard-friday

From: searchenginewatch.com

Here’s a roundup of today’s other search news and headlines from around the web, sorted by category.


Read Original: http://feeds.searchenginewatch.com/~r/sewblog/~3/Q1n2dBPpAI0/110331-170000

/// Posted by Alexandre Brabant on Thursday, March 31, 2011

Google +1: Not Really Social, All About Business

From: searchenginewatch.com

One hopes the new tagging element rolled out by Google this week is really not their interpretation of social networking, but just another attempt to get users to do some of Google’s heavy lifting. No doubt the popularity of Facebook’s Like button and the ever-present Tweet This buttons on most sites’ content pages these days motivated Google, but the limitations on how +1 is passed around show the hole in Google’s social networking efforts.

Google +1 is a marketing ploy more than a true social sharing of information. Google has said it may have an impact on organic rankings, but in reality it is the push on the advertising side that I think Google is more concerned with.


Read Original: http://feeds.searchenginewatch.com/~r/sewblog/~3/yD1FncGH_AA/110331-162431

/// Posted by Alexandre Brabant on Thursday, March 31, 2011

Bing Tests New Homepage Layout

From: searchenginewatch.com

Bing’s homepage may be getting a new look—and the new layout is a bit reminiscent of Google’s current homepage.

Rather than having the vertical links to Images, Videos, etc. at left on top of their background picture, in this test Bing has moved those links to the top of the screen and arranged them horizontally (but thankfully doesn’t follow Google’s lead with a “More” pulldown).


Read Original: http://feeds.searchenginewatch.com/~r/sewblog/~3/2e-DYPk_lUI/110331-161024

From: searchenginewatch.com

Microsoft today filed a formal complaint in Brussels accusing Google of maintaining a monopoly of the search market and unfairly promoting its own products. The chief concerns from Microsoft are the monopolization of YouTube indexing, the Android market, and Google advertising.


Read Original: http://feeds.searchenginewatch.com/~r/sewblog/~3/_wwOw0lsVhQ/110331-144017

From: searchenginewatch.com

Google has settled its case with the FTC over users being forcibly joined to Buzz for a brief period after its launch last year; it now has to have users sign up for the product.

Google “will have to ‘obtain express affirmative consent’ from users before sharing with any third parties. That means sticking information in front of a user’s face telling him or her what information will be shared, who it will go to, and what the purpose of the sharing is. The order says this disclosure has to be done in addition to any written privacy policy, end user license agreement, or ‘terms of use’ page. That’s a significant departure from the industry norm,” PaidContent noted.


Read Original: http://feeds.searchenginewatch.com/~r/sewblog/~3/RBZC557T_CQ/110331-035439

From: searchenginewatch.com

Apart from +1, Google has been busy this week rolling out a number of feature additions; contextually targeted content advertising and a refresh of language targeting were two that should be noted.

“Many of you use contextual targeting on the Google Display Network to reach potential customers as they read web content directly related to your products or services. To date, you’ve been able to do this by specifying keywords that work together to show your ads on relevant webpages. This week, you’ll also be able to specify topics to contextually target your ads to pages in the Google Display Network. With this additional contextual targeting option, you’ll be able to select from over 1,750 topics and sub-topics to target your ads, helping you quickly reach a broad audience across the web that’s actively engaged with content related to your business,” the AdWords blog reported.


Read Original: http://feeds.searchenginewatch.com/~r/sewblog/~3/s8FFLBTjvsI/110331-021929

From: google.com

“How’s the wine?”

You really can’t answer that question out of context. Compared to what? Compared to a hundred dollar bottle? Not so good. Compared to any other $12 bottle… great!

“How was the hotel?”

“How’s the service at the post office?”

In just about all the decisions we make, we consider the price. A shipper doesn’t expect the same level of service quality from a first class letter delivery as it does from an overnight international courier service. Of course not.

And yet…

A quick analysis of the top 100 titles on Amazon (movies, books, music, doesn’t matter what) shows zero correlation between the price and the reviews. (I didn’t do the math, but you’re welcome to… might be a good science fair entry). Try to imagine a similar disconnect if the subject was cars or clothing…
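For anyone tempted by that science fair entry, the check is a plain Pearson correlation between price and average review score. The ten data points below are made up for illustration, not pulled from Amazon:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    std_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (std_x * std_y)

# Made-up prices and average star ratings for ten titles.
prices  = [9.99, 14.99, 7.99, 24.99, 12.99, 9.99, 19.99, 5.99, 29.99, 11.99]
ratings = [4.5, 4.1, 3.8, 4.4, 4.6, 4.0, 4.7, 4.2, 3.9, 4.3]
pearson_r(prices, ratings)  # close to zero: price tells you nothing about the reviews
```

A coefficient near zero is exactly the "zero correlation" claim: knowing a title's price gives you essentially no information about how well it is reviewed.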

For any other good or service, the value of a free alternative that was any good would be infinite—free airplane tickets, free dinners at the cafe… When it comes to content, though, we rarely compare the experience to other content at a similar price. We compare it to perfect.

People walking out of the afternoon bargain matinee at the movies don’t cut the film any slack because it was half price. Critics piling on to a music video on YouTube never mention the fact that HEY IT WAS FREE. There is no thrift store for content. Sure, we can get an old movie for ninety-nine cents, but if we hate it, it doesn’t matter how cheap it was. If we’re going to spend time, apparently, it better be perfect, the best there ever was, regardless of price.

This isn’t true for cars, potato chips, air travel, worker’s comp insurance…

Consider people walking out of a concert where tickets might be being scalped for as much as $1,000. That’s $40 or more for each song played—are they considering the price when they’re evaluating the experience? There’s a lot of nuance here… I’m certainly not arguing that expensive is always better.

In fact, I do think it’s probably true that a low price increases the negative feedback. That’s because a low price exposes the work to individuals that might not be raving fans.

Free is a valid marketing strategy. In fact it’s almost impossible for an idea to have mass impact without some sort of free (TV, radio, webpages, online videos… they’re all free). At the same time, it’s not clear to me that cheaper content outperforms expensive in many areas. As the marginal cost of delivering content drops to zero (all digital content meets this definition), I think there are valid marketing reasons to do the opposite of what economists expect.

Free gets you mass. Free, though, isn’t always the price that will help you achieve your goals.

Price is often a signalling mechanism, and perhaps nowhere more than in the area of content. Free enables your idea to spread, price, on the other hand, signals individuals and often ends up putting your idea in the right place. Mass shouldn’t always be the goal. Impact may matter more.

Read Original: http://feedproxy.google.com/~r/typepad/sethsmainblog/~3/1-thq90VpnA/compared-to-perfect-the-irrelevance-of-price-for-content.html

/// Posted by Alexandre Brabant on Wednesday, March 30, 2011

Gmail Plans for More Intuitive Ads

From: searchenginewatch.com

Google is planning higher quality Gmail ads: more targeted, more useful, and tailored to users’ desires.

Although Gmail is said to be implementing changes in the ads already, the personalization-based ads could still be a month out.


Read Original: http://feeds.searchenginewatch.com/~r/sewblog/~3/_qfDG0bdNx8/110330-190000

From: searchenginewatch.com

When Google Buzz launched last year, it quickly met security issues. Those issues, in turn, led to millions of dollars in fines, a downward PR spiral, and contributed significantly to the general distrust of Google, especially in Europe. It also led Google to immediately change how sharing worked in its products.

In a blog post today, Google publicly apologized again for “the mistakes [they] made with Buzz,” describing the privacy incidents surrounding its launch as falling short of Google’s “usual standards for transparency and user control.”


Read Original: http://feeds.searchenginewatch.com/~r/sewblog/~3/pmB8tS1LhiA/110330-181600

From: searchenginewatch.com

Here’s a roundup of today’s other search news and headlines from around the web, sorted by category.


Read Original: http://feeds.searchenginewatch.com/~r/sewblog/~3/papc6yF1w3s/110330-170000
