How to Find Twitter Influencers that Will Share Your Content

Chris Makara

One of the toughest parts of content creation is actually promoting the content. Sure, you can post it on various social channels and hope you are reaching the right audience that will want to share it. LinkedIn Groups, for instance, can be a great channel for putting content to work for lead generation.

The trouble with most social networks is the difficulty of finding true influencers. Twitter is no exception.

There are many social tools that help you understand the social influence of a person. Typically, these tools look at audience size (reach), Klout scores, and other social activity. However, few of them, if any, give you easy insight, at scale, into what content a person is sharing that relates to your business.

For example, how do you find the most active social media influencers on Twitter that will share your content?

How to Identify Twitter Influencers

Before we can actually find them, we need to settle on how to identify the users we want to target. There are three questions I like to ask when identifying influential Twitter users:

  • Who are the most active users sharing my competitors' blog content?
  • Who are the most active users sharing the top 100 Google search results for my topic?
  • Who are the most active users sharing a certain hashtag for a topic?

You may have different criteria, and they can easily be adapted to the methods I explain below for gathering the data we need to identify the influencers.

Most people simply look at the total number of retweets for a particular link. But how does that help you find out who is actually tweeting about it?

Sure, you could click on the number in the Twitter share icon for the blog post and see a list of people who recently tweeted the link. Unfortunately, depending on the age of the link, Twitter won't give you all the historical data. Luckily, Topsy can provide us with this data.

More than likely you have used Topsy for various reasons, and no doubt you have seen the list of users who tweeted a link. But if only there were a way to easily download this data. And not just the data for a single link, but for countless links.

Well there is...but we will get to that a bit later.

First, we need to gather a list of URLs we want the data for.

I typically like to get the info I need through free tools and resources, and use a paid solution when it makes sense. For the purpose of this blog post, I'll cover three free sources and two paid services to get the data we need to find these influencers. The five potential sources for data collection are:

  • MozBar
  • Xenu Link Sleuth
  • BuzzSumo
  • Scrapebox
  • SEMrush

Chances are that by reading this blog you already have a subscription to SEMrush, so you will have access to almost all the methods I'll go over and can choose which method works best for you.

In fact, you can use a combination of these methods to pull some great results.

While you more than likely use these tools for various purposes, I'll reveal how you can use them to gather the data we need to find truly influential Twitter users. We'll then pair this data with a nifty script that will allow us to easily scrape the results. If you are not familiar with web scraping, it is basically a method to extract data or information from a website.
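To make the idea concrete, here is a toy example of web scraping. It is illustrative only (not the script we will use later): it fetches a page and pulls every absolute link out of the HTML with a regular expression. It assumes Node.js 18+, which ships a built-in fetch.

// Toy web scraping example: download a page and extract its links.
async function scrapeLinks(url) {
  const html = await (await fetch(url)).text();
  // Collect the href value of every absolute link on the page
  return [...html.matchAll(/href="(https?:\/\/[^"]+)"/g)].map(m => m[1]);
}

scrapeLinks("https://example.com").then(links => console.log(links));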

With that being said, let’s get started.


MozBar

The first way we can get data is with MozBar. This free toolbar has a nice "Export SERP to CSV" function we will use. In order to use it, you will need to run a Google search in the web browser the toolbar is installed in.

There are a number of different searches you can run to find a list of sites/URLs that will give you the data you are looking for. For example, you can use some of these queries:

  • site:www.SITE.com/blog (this will give you a list of indexed pages containing that URL)
  • intext:"TOPIC" intitle:"TOPIC" inurl:blog (this will give you a list of pages that contain the topic in both the page title and page content, and that live on a URL containing "blog")

By default, Google will display 10 search results. With this default, you will only get 10 results in your CSV download. However, with a quick modification to your search URL, we can get 100 results.

Simply add "&num=100" to the end of the URL in your browser address bar and hit enter.
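For example, your address bar would go from something like this (illustrative URLs):

https://www.google.com/search?q=site:www.SITE.com/blog

to this:

https://www.google.com/search?q=site:www.SITE.com/blog&num=100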

Once the new page loads, you will see 100 results. Now you can click the "Export SERP to CSV" button in the MozBar.

What's nice about the export is that it pulls all the metrics you find in Open Site Explorer. Opening the file in Excel, you will see this data and can sort and clean up the results if you are looking for certain criteria to find influencers.


Xenu Link Sleuth

One of the pitfalls of using the MozBar is that you can only get 100 results. If you were trying to get a list of all blog posts from a website, chances are you would miss many of their URLs.

However, by using Xenu Link Sleuth, we can essentially scrape a full website. This can be your website, a competitor's, or even a large content site. Additionally, you can target a subsection of a site, such as a blog.

You can download and install the desktop application here. Disregard the shady appearance of the site; it has been a "go-to" app of the SEO community for many years.

Once installed, open up the program and type in the URL you would like to scrape. If you want to scrape just the blog section, simply put in the URL to the blog homepage. When ready, run Xenu Link Sleuth and it will scrape the site. Upon completion, click "File > Export Tab Separated File" and save the file.

Open the file in Excel and you will see quite a bit of data has been scraped. I recommend creating a table out of the data by pressing "Ctrl + T". This will allow you to easily filter the data. The first thing I do is filter column D (Type) to show only "text/html" results. This cleans up the data to show the actual URLs of the site.

Depending on the size of the site, you can sort the remaining URLs to exclude any you are not interested in finding influencers for. Typically, I try to filter down to just the blog section and exclude category, author, or tag pages. Most sites will differ slightly from one another, but once you have done this a few times, it becomes easy to quickly clean up the URLs.
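If you would rather script this step than do it in Excel, here is a rough Node.js equivalent. It is a sketch built on assumptions: the export is saved as "xenu-export.txt" (a hypothetical filename), the URL sits in column A, and the Type value sits in column D, as described above.

// Filter Xenu's tab-separated export down to the blog URLs we care about.
const fs = require("fs");

const rows = fs.readFileSync("xenu-export.txt", "utf8").split(/\r?\n/);
const urls = rows
  .map(line => line.split("\t"))
  .filter(cols => (cols[3] || "").toLowerCase().includes("text/html")) // column D = Type
  .map(cols => cols[0])                                                // column A = URL
  .filter(url => !/\/(category|author|tag)\//.test(url));              // skip archive pages

fs.writeFileSync("urls.csv", urls.join("\n"));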


While Xenu Link Sleuth is an excellent application, one downside is that it can take quite a while to scrape a site. However, it can easily run in the background.


BuzzSumo

A more recent application that can also be used to find influencer URLs is BuzzSumo. I've been using their site for several months for various purposes.

BuzzSumo makes it easy to get a list of URLs to export as a CSV file for Excel. Simply type in the URL you want to find influencers of, or the topic of the blog posts you want to find influencers of, or use the advanced search to build your own custom query.

Once your search has been performed, you will see a list of all URLs that meet the criteria. The results will show you the total shares for each post and break it down by each network.

If you have too many results, you can revise your search to show any or all of the following content types: articles, infographics, guest posts, giveaways, interviews, or videos. Additionally, you can change the date range as well.

Once you are satisfied with the search results, you can easily export the results to Excel.


A benefit of using BuzzSumo is that the social shares for each URL, as well as the publication date, are included. When you have the data in Excel, you can filter it to focus on fresher content as well as content with many social shares.

One downside of using BuzzSumo is that the results only go back six months. In some cases this won't matter, since influencers on a topic come and go over time, so you might be interested in more recent results anyway.


Scrapebox

I'm going to include Scrapebox here as an option. While this program is not free, it does have many uses. Even though Scrapebox has more of a "black hat" reputation, it does have a variety of "white hat" uses. Everything you ever wanted to know about Scrapebox can be found here.

For the purpose of this post, I am going to focus on a free add-on that will scrape XML sitemaps. Once you have Scrapebox open, click "Addons > Show Available Addons", select "Scrapebox Sitemap Scraper", and click to install the add-on.

This is useful if you want to target the influencers of a competitor's blog or even a large content site in your niche. You would simply need to find their XML sitemap and plug it into the sitemap scraper.

Depending on how your competitor's site is built, they may or may not have an XML sitemap for their blog; it might be included in their main XML sitemap. Most commonly, sitemaps are found at "www.SITE.com/sitemap.xml". If you can't easily find the sitemap, you can occasionally find it through the robots.txt file for the domain.

In Scrapebox, you will need to click on "Addons > Scrapebox Sitemap Scraper". On the following screen, you will need to click on the "Import/Export" button and choose how you want to import the sitemap URL. Next, click on "Start" and watch Scrapebox work its magic. Once completed, simply click on the "Import/Export" button and export the results.


A benefit of using Scrapebox is that you can import multiple XML sitemap links and scrape them all at once. You can get the URLs for thousands of pages in just a few minutes. This is my preferred method of scraping when I know the URL of the sitemap.
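For the curious, here is a hand-rolled sketch of what a sitemap scraper does under the hood. This is hypothetical code, not Scrapebox's: it downloads an XML sitemap and pulls out every <loc> entry, assuming Node.js 18+ for the built-in fetch.

// Fetch an XML sitemap and extract the URL inside each <loc> tag.
async function scrapeSitemap(sitemapUrl) {
  const xml = await (await fetch(sitemapUrl)).text();
  return [...xml.matchAll(/<loc>\s*([^<]+?)\s*<\/loc>/g)].map(m => m[1]);
}

scrapeSitemap("http://www.SITE.com/sitemap.xml")
  .then(urls => console.log(urls.length + " URLs found"));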


SEMrush

The final tool we can use to gather URLs to identify social sharers on Twitter is SEMrush. Using SEMrush, it is very easy to get a list of URLs for a website that rank organically in the top 20 search results.

Simply go here, type the URL you are interested in into the search bar, and click "Search". SEMrush will then display a list of all URLs for that domain that rank within the top 20 organic search results. Simply click the "Export" link and save the results.


A benefit of using SEMrush is that the exported data will include search volume data, trends, and the date of the last update. If you want to filter your list of URLs by most recently updated, SEMrush will provide insight into this.

Installing iMacros

Up next, we need to install iMacros, which will allow us to scrape the social influencer data from Topsy. iMacros is a browser extension that works in both Firefox and Google Chrome.

While I use Chrome for just about everything, I prefer using iMacros in Firefox, mainly because I can let Firefox run iMacros in the background while I use Chrome for other things as the macro runs.

You can get iMacros for Chrome here and for Firefox here.

Once installed, we need to load up the script that will take our list of URLs, scrape the associated data from Topsy, and save it into a CSV file. Here is how to do it:

  • Open the iMacros extension in the browser of your choice (I prefer the Firefox version to the Chrome one, but both should work)
  • Grab this code and copy it to your clipboard
  • Click the "Record" tab and press "Record"
  • Click "Stop"
  • Click the "Manage" tab and click "Edit Macro"
  • Delete all text that appears in the editor and paste the text from your clipboard
  • Edit line 11 to match the location of your iMacros installation
  • Click "Save & Close"
For reference, here is the script you will be pasting in:

//imacros-js:showsteps no
//
// In this version:
// - added comments
// - added filter for records to skip

// Declaration of main options, to be changed before execution

{
var timeStamp = new Date().getTime(),
    iMacrosFolder = "C:\\Users\\Chris\\Documents\\iMacros\\Topsy", // iMacros install folder (this is line 11; edit it to match yours)
    urlPrefix = "http://topsy.com/trackback?url=",
    inputFile = "urls",        // CSV file with the list of URLs to look up
    outputFile = "TopsyData",  // file that collects the scraped profile links
    stop = false,              // flag to be able to use the Stop button during macro execution
    loop = 1;                  // set to 2 to ignore a header line at the beginning of the input file

// set a timestamp value that will be used for all files saved
iimPlay("code: set !extract {{!NOW:yyyymmdd_hhnnss}}");
timeStamp = iimGetExtract();

// write a header row to the output CSV file
headerFile = "URL";
iimSet("header", headerFile);
iimPlay("code: set !extract {{header}}\nSAVEAS TYPE=EXTRACT FOLDER=" + iMacrosFolder + " FILE=" + outputFile + "_" + timeStamp + ".csv");
}

// read the next URL from the input CSV, open its Topsy trackback page,
// and save every Twitter profile link found there
function loadInputFile() {
  macro  = "CODE:" + "\n";
  macro += "SET !TIMEOUT_PAGE 15" + "\n";
  macro += "SET !TIMEOUT_STEP 2" + "\n";
  macro += "SET !ERRORIGNORE NO" + "\n";
  macro += "SET !DATASOURCE " + iMacrosFolder + "\\" + inputFile + ".csv" + "\n";
  macro += "SET !DATASOURCE_COLUMNS 1" + "\n";
  macro += "SET !DATASOURCE_LINE " + loop + "\n";
  macro += "SET !EXTRACT {{!COL1}}" + "\n";
  retcode = iimPlay(macro);
  stop = false;
  if (retcode < 0) { // an error has occurred
    errText = iimGetLastError();
    result = "Error: " + retcode + " " + errText;
    stop = true;
    return result;
  } else {
    errText = iimGetLastError();
    extracted = iimGetExtract().replace(/\s+/g, " ");
    myExtract = extracted.split("[EXTRACT]");
    if (myExtract[0] == 'END OF FILE') {
      stop = true;
    }
  }

  // open the Topsy trackback page for this URL
  macro  = "CODE:" + "\n";
  macro += "SET !TIMEOUT_PAGE 15" + "\n";
  macro += "SET !TIMEOUT_STEP 2" + "\n";
  macro += "URL GOTO=" + urlPrefix + myExtract[0] + "\n";
  retcode = iimPlay(macro);

  // walk through up to 10 pages of results, 10 profile links per page
  p = 1;
  while (p < 11) {
    i = 1;
    while (i < 11) {
      extractURL(i);
      extracted = iimGetExtract().replace(/\s+/g, " ");
      myExtract = extracted.split("[EXTRACT]");
      if (myExtract[0] != "#EANF#") { // #EANF# means the element was not found
        i++;
        iimSet("url", myExtract[0]);
        iimPlay("code: set !extract {{url}}\nSAVEAS TYPE=EXTRACT FOLDER=" + iMacrosFolder + " FILE=" + outputFile + "_" + timeStamp + ".csv");
      } else { // no more links on this page, so stop both loops
        i = 12;
        p = 12;
      }
    }
    p++;
    loadNextPage();
  }
}

// click the "Next" link to load the following page of Topsy results
function loadNextPage() {
  macro  = "CODE:" + "\n";
  macro += "SET !TIMEOUT_PAGE 15" + "\n";
  macro += "SET !TIMEOUT_STEP 2" + "\n";
  // uncomment the next two lines to wait a random 10-12 seconds between pages
  // macro += "SET !VAR1 EVAL('var randomNumber=Math.floor(Math.random()*3 + 10); randomNumber;')" + "\n";
  // macro += "WAIT SECONDS={{!VAR1}}" + "\n";
  macro += "TAG POS=1 TYPE=A ATTR=TXT:Next" + "\n";
  retcode = iimPlay(macro);
}

// extract the href of the i-th Twitter profile link on the current page
function extractURL(i) {
  macro  = "CODE:" + "\n";
  macro += "TAG POS=" + i + " TYPE=A ATTR=CLASS:pull-left EXTRACT=HREF" + "\n";
  retcode = iimPlay(macro);
}

main: {
  while (!stop) {
    loadInputFile();
    loop++;
    if (retcode == -101) { break main; } // abort script if user presses the Stop button
  }
}

Your macro will be saved as "#Current.iim". You need to right-click on the name and select "Rename" to rename it to something else. By default, any time you click "Record", the macro will be saved as "#Current.iim", so renaming it prevents it from accidentally being overwritten. For this macro to work, we need to be sure it has a .js extension, so a name like "Topsy.js" would work.


Before we actually run the macro, we need to tell iMacros which URLs we want to perform the macro on.

Let's Clean Up Our List of URLs

The iMacros script is written to take a CSV file of URLs and perform the scraping on them. Depending on the method(s) you used to gather your URLs, you might find yourself with hundreds or even thousands of URLs. Therefore, you may want to slim down the list. A few ways I have slimmed down the list are:

  • Sorting by published date to show the most recent 100 posts. Some blogs include the date in the URL, so sorting in Excel is easy. However, some sites won't have the date in the URL. In this case, you can use Google Docs and the "ImportXML" function to pull in the publish dates of the posts (see the example after this list).
  • The SEOTools Excel plugin by Niels Bosma has the ability to import the number of Tweets for a URL. You can easily use this function to find out which posts have the most Tweets and focus on those.
  • Manually sifting through Excel to remove pages I am not interested in, such as author pages, tag pages, pagination pages, etc.
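As an example of the "ImportXML" route, the formula below pulls the publish date of the post whose URL sits in cell A2. The XPath here is hypothetical; the right expression depends entirely on the site's markup, so inspect the page source first.

=IMPORTXML(A2, "//meta[@property='article:published_time']/@content")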

I don't recommend a large list of URLs unless there is minimal social activity for those URLs. If you are using URLs from a large site with a large following, you can get a good idea of the most active users from 100-200 URLs.

Once we have a list of URLs, we need to get them cleaned up so that we can start scraping some real actionable data. Depending on what method you used to gather your URLs and filter them, the Excel file will contain various columns of data.

We only need the column that contains the URLs we want to scrape, so delete all other columns, leaving the URLs in column A. Be sure to delete the row heading for the URL column as well.
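When you are done, the file should contain nothing but one URL per row, something like this (illustrative URLs):

http://www.SITE.com/blog/post-one/
http://www.SITE.com/blog/post-two/
http://www.SITE.com/blog/post-three/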

When ready, you will need to save your Excel file as "urls.csv" in your iMacros installation folder. For me, that folder is located at "C:\Users\Chris\Documents\iMacros\". Since I use iMacros for other things, I went ahead and created a Topsy folder, so my full save path is "C:\Users\Chris\Documents\iMacros\Topsy". This will be the same location that you entered on line 11 when setting up your macro.

Getting the Data to Identify Twitter Influencers

Now we are all set up and ready to go. All you need to do is click the "Play" button in iMacros and the script will go to work. Essentially, it takes the first URL in your "urls.csv" file and searches for it on Topsy. Next, it extracts the IDs of the Twitter users who retweeted it, moving to the next page of results until all the Twitter IDs have been found, and saves them into a single CSV file in your iMacros folder.

Once the first URL from your "urls.csv" file is completed, the script moves to the next URL in your list and repeats the same process. All data is saved into the same CSV. This continues until every URL in your "urls.csv" file has been processed.

How long the process takes depends on how many retweets each URL has; it can run anywhere from a few minutes to a few hours. Either way, it is a lot quicker than doing it by hand!

Identifying the Most Active Twitter Users

Once the script has completed running, you can open up the CSV file that was created. It will be located in the iMacros folder you designated during setup and named something like "TopsyData_20140731_153434.csv".

Within the CSV you will see a list of Twitter URLs. Let's create a table (Ctrl + T) and then make a pivot table out of the data by clicking "Summarize with Pivot Table" under the "Design" ribbon in Excel. Once created, drag the "URL" field to both the "Rows" and "Values" sections to create the pivot table data.

Next, within the pivot table, click on the drop-down for the row labels and select "Value Filters > Top 10." You can change the 10 to whatever number you want, and the pivot table will display the top 10 (or 20, or more) users with the most tweets across all the URLs we have data for.
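If you prefer code to pivot tables, the snippet below produces the same top-10 count. It is a sketch that assumes the output CSV contains one Twitter profile URL per line, with a "URL" header row as the script writes it; swap in your own timestamped filename.

// Count how often each Twitter profile appears and print the top 10.
const fs = require("fs");

const lines = fs.readFileSync("TopsyData_20140731_153434.csv", "utf8")
  .trim().split(/\r?\n/).slice(1);      // drop the "URL" header row

const counts = {};
for (const url of lines) counts[url] = (counts[url] || 0) + 1;

const top10 = Object.entries(counts)
  .sort((a, b) => b[1] - a[1])          // most tweets first
  .slice(0, 10);

console.table(top10);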


Simply visit each of the profiles listed to determine if the user fits the criteria you are looking for in a Twitter influencer and begin to build a relationship with them. You can follow them, tweet them, share their tweets, etc.

Bonus Tip - If you have the Niels Bosma SEO Tools Excel plugin installed, you can create a column to the right of your top 10 to show the follower count for each user. Simply use the formula:

=RIGHT(XPathOnUrl(A2,"//li[@class='ProfileNav-item ProfileNav-item--followers']"),LEN(XPathOnUrl(A2,"//li[@class='ProfileNav-item ProfileNav-item--followers']"))-10)

Hat tip to Matthew Barby for this formula.

Additionally, if you want to pull nearly all the Twitter information for each user (including follower counts), I have created a handy Twitter Excel spreadsheet.

Go Build New Relationships

Now that you have identified potential influencers, it’s time to start building relationships with them. Naturally, you can follow, retweet, or even message them.

There are countless ways you can use this data to build influencer relationships. In what ways do you plan on using this data?

Be sure to sound off in the comments below and let me know!

Chris Makara is an interactive marketing & digital strategist with 10+ years of experience. He is the founder of Ciked, a digital marketing agency for small businesses. You can connect with him on Twitter, LinkedIn, or Google+.
