Analysing data on Facebook Ad Campaigns with Redash

Our previous article was dedicated to collecting data on Facebook ad campaigns. Today we will analyze this data with Redash. As a first step, we need to upload our script to the AWS cloud and wrap it in a small AIOHTTP server before passing the data to Redash. Let's improve the script a little for this task:

from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount
from facebook_business.adobjects.adreportrun import AdReportRun
from facebook_business.adobjects.adsinsights import AdsInsights
from facebook_business.adobjects.adaccountuser import AdAccountUser as AdUser
from facebook_business.exceptions import FacebookRequestError
import time

Redash receives data in JSON format, which we haven't worked with yet. In our case the JSON payload is an array of objects, or, in Python terms, a list of dictionaries, and the script will convert its output to exactly that before passing it to Redash. We'll need the variables that store our access token, app id and app secret, and two functions: wait_for_async_job() and get_insights(). The latter receives account parameters and collects data on ad campaigns; we select the following fields: clicks, impressions, spend and date_start.
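The endpoint we are about to build will return a payload that looks roughly like this (the values are made up, but the keys match the fields we request):

[
    {"date_start": "2021-01-01", "clicks": "17", "impressions": "903", "spend": "11.2"},
    {"date_start": "2021-01-02", "clicks": "24", "impressions": "1251", "spend": "14.9"}
]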

We need the entire script from our previous article – Collecting Data on Facebook Ad Campaigns
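If the previous script isn't at hand, here is a minimal sketch of those helpers under the same field set; the polling interval and report params are illustrative and may differ from the original:

# Initialise the API and list the ad accounts available to the user
# (app_id, app_secret and access_token come from the previous article)
FacebookAdsApi.init(app_id, app_secret, access_token)
my_accounts = list(AdUser(fbid='me').get_ad_accounts())

# Fields we request from the Insights API
fields = [
    AdsInsights.Field.clicks,
    AdsInsights.Field.impressions,
    AdsInsights.Field.spend,
    AdsInsights.Field.date_start,
]

def wait_for_async_job(job):
    # Poll the async report run until Facebook marks it as completed
    while True:
        job = job.api_get()
        if job[AdReportRun.Field.async_status] == 'Job Completed':
            return job.get_result()
        time.sleep(10)

def get_insights(account, date_preset):
    # Start an async insights report for one ad account
    # and return its rows as plain dictionaries
    async_job = account.get_insights(
        fields=fields,
        params={'date_preset': date_preset, 'time_increment': 1},
        is_async=True,
    )
    return [row.export_all_data() for row in wait_for_async_job(async_job)]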

The return_json() function will call get_insights() for each account and append the results to insights_lists. Since we may have several ad accounts, the output can be a list of lists of dictionaries instead of a flat list of dictionaries. A simple lambda expression flattens the output and returns insights_lists_flatten. Now the script returns a list of dictionaries:

def return_json():
    insights_lists = []
    date_preset = 'last_year'
    for elem in my_accounts:
        # Collect last year's insights for each ad account
        elem_insights = get_insights(elem, date_preset)
        insights_lists.append(elem_insights)
    # Flatten the list of lists into a single list of dictionaries
    flatten = lambda lst: [item for sublist in lst for item in sublist]
    insights_lists_flatten = flatten(insights_lists)
    return insights_lists_flatten
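For example, flatten([[{'clicks': '5'}], [{'clicks': '8'}]]) returns [{'clicks': '5'}, {'clicks': '8'}], a flat list of dictionaries, which is exactly the shape Redash expects.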

We also need an AIOHTTP server that will return our output as JSON. Create a new file, import the AIOHTTP library and the return_json() function from the preceding script (saved here as get_json.py). Write a simple request handler: it waits until all the data is collected from Facebook, then returns it with json_response(), which sends the data in JSON format.

from aiohttp import web
from get_json import return_json

async def handler(request):
    # Collect the insights and return them as a JSON response
    data = return_json()
    return web.json_response(data)
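Note that return_json() is a blocking call, so it stalls the event loop while Facebook prepares the report. For this single-endpoint server that is fine, but if you want the loop to stay responsive, one option (a sketch, not part of the original script) is to offload the call to a worker thread:

import asyncio

async def handler(request):
    # Run the blocking Facebook export in a worker thread
    loop = asyncio.get_running_loop()
    data = await loop.run_in_executor(None, return_json)
    return web.json_response(data)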

Initialize and run our application:

app = web.Application()
app.add_routes([web.get('/json', handler)])
# aiohttp listens on port 8080 by default
web.run_app(app)

Now, go to the AWS cloud, create a new folder and upload both scripts over an SFTP connection. Check that the needed port (8080) is open: in the AWS console, go to Network & Security → Security Groups → default.

Run our file on the server. You can check whether it works by opening the server's IP address on port 8080 with the /json route in a browser.
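Alternatively, you can test it from Python (assuming the requests package is installed; replace <server-ip> with your instance's address):

import requests

# Fetch the endpoint and print the first row of insights
rows = requests.get('http://<server-ip>:8080/json').json()
print(rows[0])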

Connect to Redash using the URL, and we'll get the same table returned by our script:

url: http://<server-ip>:8080/json

With the query results saved in Redash, we can write the following query against them (query_45 refers to the number of the saved query):

SELECT date_start, SUM(clicks) AS clicks, SUM(impressions) AS impressions, SUM(spend) AS spend
FROM query_45
GROUP BY date_start

It returns this table, grouped by the date_start column:

We can now plot this data on a chart and see whether there is any correlation between ad spend and clicks:

Bingo! Next time, we will show you how to get data on ad campaigns from Vkontakte.
