Do you want some Prebid?

FUNCORP’s services and advertising infrastructure have recently undergone significant changes. In addition to Prebid Mobile, we now also support and develop a Prebid Server to work with our apps.

We chose Prebid because we’ve been using its Software Development Kit for a long time and have enough experience and competence in working with it. This expertise allows us to improve and develop the service without compromising its stability. As a result, Prebid has become one of the critical services within our infrastructure.

Before telling you about the decision, the implementation, and future plans, let’s review the Prebid basics.

This article covers not only the technical side of the Prebid Server but also product metrics.

To begin with, let me remind you that FUNCORP develops entertainment services, and the primary monetization method is displaying ads.

To display ads, you first need someone to supply those ads to your apps. There are multiple advertising networks with which you can sign a contract and set up an integration to get ads for your users.

Some of them you can see in the picture below:

[Image: logos of ad networks]

Our app supports ads of different formats and sizes. It can look like this:

[Image: examples of ad formats in the app]

When you open the app, you get ad creatives — the actual ads shown to users through a digital platform (in this case, our mobile apps). To deliver creatives to the user, we send requests to different ad networks, aggregate the received ads on our side, and display them.

Previously, the advertising market used the waterfall mediation process (the so-called waterfall auction). The publisher configures requests to ad networks with a minimum ad price and goes through a prioritized list of bids from top to bottom, accepting the first bid that satisfies the condition, possibly without ever reaching the most favorable offer on the list. The whole process was slow because bids were checked one by one (like water flowing down a waterfall, hence the name).

[Diagram: the waterfall auction. Source: https://prebid.org/why-prebid/]

An alternative to the waterfall is the header bidding (HB) auction. Unlike the previous method, requests are sent to all bidders at once, and the best bid wins. This approach works faster and makes it possible to get the best available price.
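
The difference between the two schemes is easy to show in code. Here is a minimal sketch in Java (the language of the Prebid Server implementation we later chose); the AdNetwork interface and its requestBid method are hypothetical stand-ins for illustration, not part of any Prebid API:

```java
import java.math.BigDecimal;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical interface for illustration only, not a real Prebid API.
interface AdNetwork {
    Optional<BigDecimal> requestBid(String placementId);
}

public class MediationSketch {

    // Waterfall: walk a prioritized list top to bottom and take the FIRST bid
    // that clears the floor price, even if a later network would have paid more.
    static Optional<BigDecimal> waterfall(List<AdNetwork> prioritized,
                                          String placementId, BigDecimal floor) {
        for (AdNetwork network : prioritized) {
            Optional<BigDecimal> bid = network.requestBid(placementId); // one by one
            if (bid.isPresent() && bid.get().compareTo(floor) >= 0) {
                return bid; // stop at the first acceptable bid
            }
        }
        return Optional.empty();
    }

    // Header bidding: ask ALL networks at once and take the highest bid.
    static Optional<BigDecimal> headerBidding(List<AdNetwork> networks,
                                              String placementId, BigDecimal floor) {
        return networks.parallelStream()
                .map(network -> network.requestBid(placementId))
                .flatMap(Optional::stream)
                .filter(bid -> bid.compareTo(floor) >= 0)
                .max(Comparator.naturalOrder()); // the best price wins
    }
}
```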

[Diagram: the header bidding auction]

Prebid, an open-source unified auction solution, was launched in 2015. Prebid is the product of the joint effort of three major advertising companies — AppNexus, Rubicon Project, and PubMatic. Prebid made it possible to set a standard for working with HB that simply hadn’t existed before and to build a large community around the product.

Prebid develops three products — for the web, for the server, and for mobile platforms:

  • Prebid.js

  • Prebid Server

  • Prebid Mobile

Prebid.js is a header bidding solution for websites, implemented as a JavaScript library. It was the first product Prebid developed: embedded in the website code, it sends requests to multiple advertising networks. Since it is a client-side solution, Prebid.js is inferior to Prebid Server in speed and scalability.

Prebid Mobile is a mobile SDK (Software Development Kit) that works on the header bidding principle, just like the other solutions. It can be used to receive ads either directly from ad networks or in conjunction with a Prebid Server. The SDK is available for the most popular mobile operating systems: iOS and Android.

We will focus on the Prebid Server. With this solution, the client sends a single request to the server, and the server, in turn, requests creatives from all selected partners.
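
Concretely, Prebid Server accepts OpenRTB 2.x bid requests over HTTP on its auction endpoint. Below is a hedged sketch of such a call using the JDK HTTP client; the host name and the request contents are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AuctionRequestSketch {
    public static void main(String[] args) throws Exception {
        // Minimal OpenRTB 2.x request for a single 320x50 banner impression.
        String bidRequest = """
                {
                  "id": "req-1",
                  "imp": [{
                    "id": "imp-1",
                    "banner": {"format": [{"w": 320, "h": 50}]}
                  }]
                }
                """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://prebid.example.com/openrtb2/auction"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(bidRequest))
                .build();

        // The response is an OpenRTB bid response with the winning bids.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```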

Why do we need our own Prebid server?

Let’s see what options are available for working with Prebid Server:

  1. Renting a Prebid Server. There is a comprehensive list of providers on the Prebid website, each offering different partnership terms. Until recently, we used the services of one of these companies.

  2. Creating your own Prebid Server. Advertising is the primary source of our income, so we want to have maximum control over the server, the process of receiving ads, and security.

You can use the Prebid Server to flexibly configure your work with partners, and you don’t need to modify the client-side code. If you have the necessary expertise and resources, hosting your own server is more cost-efficient in the long run despite the possible maintenance challenges.

Previously, we were limited to improving and optimizing the Prebid experience on the client-side only, but with our own Prebid Server, we can make various improvements to both parts of the system, increasing their effectiveness many times over.

An important aspect here is that we now run the service infrastructure ourselves, which means we can ensure the same high standards of support and stability that we apply to all our other services.

Now, let’s talk about the technical aspects of this solution.

Prebid server architecture

The main components of the Prebid server-side solution are Prebid Server, Prebid Cache Server, and the NoSQL database for data storage.

[Diagram: Prebid server-side architecture]

The tasks of each component are as follows:

Prebid Server

Processes requests from the client, sends them to the selected partner ad networks, and saves bids to the NoSQL DB (Ads Creative DB) via the Prebid Cache Server.

Prebid Cache Server

Saves results received from the Prebid Server to the NoSQL DB and returns them on user request.

Ads Creative DB

Stores results according to the configured time-to-live (TTL) parameter.
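
To make the cache’s role concrete: Prebid Cache exposes a small HTTP API where creatives are stored with a TTL and later fetched by UUID. A minimal sketch (the host is a placeholder; the payload shape follows the public Prebid Cache API):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CacheSketch {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // Store a creative with an explicit TTL; the cache answers with a UUID.
        String put = """
                {"puts": [{"type": "json",
                           "value": {"adm": "<creative markup>"},
                           "ttlseconds": 300}]}
                """;
        HttpResponse<String> saved = http.send(
                HttpRequest.newBuilder()
                        .uri(URI.create("https://prebid-cache.example.com/cache"))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(put))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(saved.body()); // e.g. {"responses":[{"uuid":"..."}]}

        // Later, the client retrieves the creative by that UUID
        // (substitute the uuid returned by the call above).
        HttpResponse<String> fetched = http.send(
                HttpRequest.newBuilder()
                        .uri(URI.create("https://prebid-cache.example.com/cache?uuid=PLACEHOLDER-UUID"))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(fetched.body());
    }
}
```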

How we deployed our Prebid Server

The deployment and testing of our server were split into several stages:

  • Analysis of the available Prebid solutions
  • AWS deployment
  • Load testing
  • AWS alpha testing
  • Transfer to the data center
  • Beta testing
  • High traffic testing
  • Component tuning

Let’s go through all of them.

We looked at the available Prebid solutions. In short, there are two server-side implementations: Go-based and Java-based. We chose the Java implementation because the technology stack used in the Java Prebid Server and Cache was more familiar to us.

Next, we deployed a cluster of multiple servers and caches behind a load balancer. We analyzed how the failure of a particular node would affect application performance, checking each node (balancer, server, cache, database, bidders), and conducted a series of load tests for a single machine and for the cluster. To test the server, we set up services on a separate machine to simulate the responses of real bidders; this was necessary to avoid effectively launching a DDoS attack on a real advertising network.
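
To give an idea of what such a stub looks like, here is a minimal sketch of a fake bidder built on the JDK’s embedded HTTP server; the port, path, and response body are illustrative, not our actual test harness:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A stand-in bidder that always answers with a fixed OpenRTB-style bid response,
// so load tests hit this process instead of a real ad network.
public class MockBidder {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8090), 0);
        server.createContext("/bid", exchange -> {
            byte[] body = """
                    {"id": "req-1",
                     "seatbid": [{"bid": [{"id": "1", "impid": "imp-1",
                                           "price": 1.50, "adm": "<creative/>"}]}]}
                    """.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start(); // point the test bidder adapters at http://<host>:8090/bid
    }
}
```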

We tested the following parameters:

  • The throughput capacity of nodes
  • Server behavior when the connection to bidders, cache, or database is lost
  • Load on our network
  • The burden on hardware resources, including memory, CPU, and disk space consumption

The only element of the system we cannot control is the external bidders. For everything else, we had to ensure maximum reliability and uptime, because a complete failure of any component (i.e., an entire cluster going down) would make it impossible to get creatives.

To conduct client testing, we implemented a mechanism for quickly switching users from the partner server to our own Prebid Server. During this period, some users were switched to our servers and received ads through them for a few days. We, in turn, collected real-time advertising statistics and performance data for both the partner server and our own Prebid Server. The statistics from both servers matched, which confirmed the correctness of our configuration.

Then we moved the servers from AWS to the data center and ran another series of tests with a higher percentage of users. The data acquired from the tests allowed us to forecast the load (RPS, CPU, etc.) quite well. In the end, the trial covered up to 35% of users on Android devices and 45% on iOS devices, and the server load reached 35k RPS (Requests Per Second).

Administering ads for the server

To manage the configuration of requests to partner ad networks, we created our own service with a frontend for editing, saving, and deleting stored requests.

A few words about what a stored request is and why you need it:

Technically, we could hardcode all the parameters needed to receive ads from partners in the client: banner sizes, ad types, and the networks themselves. But this would complicate development and make our work with ad networks less flexible.

To avoid this, the Prebid Server has stored request functionality. It allows you to save chunks of configuration in the database and use only the identifier of that configuration on the client.

So, when the Prebid Server receives a request containing a stored request ID, it fetches the corresponding configuration via HTTP from the administration service and caches it for the selected time.
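
A hedged sketch of that resolution step, in the spirit of the setup described above (the admin-service URL, cache lifetime, and class names are hypothetical):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Exchanges a stored-request id for the full JSON config kept by the admin
// service, caching the result so the admin service is not hit on every auction.
class StoredRequestResolver {
    private record Entry(String json, Instant fetchedAt) {}

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final HttpClient http = HttpClient.newHttpClient();
    private final Duration ttl = Duration.ofMinutes(5); // illustrative lifetime

    String resolve(String storedRequestId) throws Exception {
        Entry cached = cache.get(storedRequestId);
        if (cached != null && cached.fetchedAt().plus(ttl).isAfter(Instant.now())) {
            return cached.json(); // still fresh, no HTTP round trip needed
        }
        HttpResponse<String> response = http.send(
                HttpRequest.newBuilder()
                        .uri(URI.create("https://admin.example.com/stored-requests/"
                                + storedRequestId))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        cache.put(storedRequestId, new Entry(response.body(), Instant.now()));
        return response.body();
    }
}
```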

Server tuning

To reduce hardware costs, we did component profiling and chose several areas where we could try to improve performance:

  • Selecting the optimal ratio of dispatchers and workers
  • Java version (the Prebid Server recently switched to 17)
  • GC options
  • Network optimizations

After testing all the hypotheses, we improved performance by ~10%.
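
For context on the first of those knobs: the Java Prebid Server runs on top of Vert.x, where the balance between event-loop ("dispatcher") threads and worker threads is set via VertxOptions. A sketch of what that tuning looks like at the Vert.x API level (the numbers are placeholders; the right ratio has to be found by profiling, and the snippet assumes the io.vertx:vertx-core dependency):

```java
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class VertxTuningSketch {
    public static void main(String[] args) {
        VertxOptions options = new VertxOptions()
                // Event-loop ("dispatcher") threads handle non-blocking I/O.
                .setEventLoopPoolSize(2 * Runtime.getRuntime().availableProcessors())
                // Worker threads handle blocking tasks; 64 here is a placeholder.
                .setWorkerPoolSize(64);
        Vertx vertx = Vertx.vertx(options);
        // ... deploy verticles / start the HTTP server here ...
    }
}
```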

Current system view

Our system consists of separate Prebid Server and Prebid Server Cache clusters, an administration system, NoSQL storage for creatives, and long-term storage for winning bids.

[Diagram: the current system architecture]

Long-term storage is necessary in case any complaints regarding creatives arise and further investigation is required. The storage time is configurable.

We constantly collect and analyze various technical metrics from system components to ensure stable operation and timely response to problems. We run health checks and monitor resource consumption, response delays, and the number of different types of errors.

All this data is displayed in real-time in monitoring systems, and automatic alerts are set up for the most critical metrics.

Very few technical improvements are left — we are now completing a security audit of the system and finalizing the administration tools to make handling stored requests easier.

Prebid product metrics

To improve the efficiency of our advertising campaigns, it is essential to monitor the Prebid server operation in real-time. Obviously, technical monitoring is crucial in the deployment of such high-load services. In addition to technical monitoring, we also consider it essential to monitor product metrics that indicate the quality of the execution of business processes.

So we’ve defined a set of metrics to monitor the operational quality of components of our own deployed infrastructure and our partners’ infrastructure. We also configured alerts for critical events to rapidly respond to emerging issues.

The selected metrics are classified based on the following attributes:

Degree of integration. Metrics indicate the operational quality:

  • Of our own infrastructure
  • Of partners’ infrastructure

Source of the events:

  • Server-side metrics
  • Client-side analytics

Criticality:

  • Requiring immediate response and repair in case of problems
  • Requiring no immediate response

Regardless of criticality, we notify our partners of any problems detected on our side so they can eliminate them, ensuring the uninterrupted operation of all end-to-end services.

Monitoring the quality of individual bidders

A bidder is a provider of advertising. With Prebid, an advertisement is returned in the form of a bid, and the bids of different bidders compete in the auction on the server. The winning bid is returned to the publisher, which loads the ad into one of its placements based on that bid. This means we can only influence an individual bidder’s quality indirectly. Still, we think it is crucial to integrate monitoring here as well: firstly, to let a partner know there is a problem if they don’t already know about it; secondly, to reconfigure the list of enabled bidders or their settings if issues with a particular bidder affect performance or other characteristics of our infrastructure.

Types of available monitoring metrics and what they indicate:

Number of bid requests to the bidder

  • If there are N bidders in Prebid, N requests will be sent to them for each client request (if not otherwise configured). Therefore, the number of requests for bids is proportional to the number of all requests to the server, as well as to the number of bidder adapters enabled on the server
  • This metric reports on the availability of individual bidders, server uptime, and the availability of the actual server configuration

The number of bids returned from bidders

  • The absolute number of bids returned by each individual bidder cannot exceed the number of requests to that bidder
  • This metric indicates the availability of individual bidders and bids returned by the bidders

Bidder fill rate

  • The bidder fill rate is calculated as the ratio of returned bids to requested bids. In effect, we get the percentage of ad requests that the advertising network served. A zero or very low fill rate should alert you: it may indicate an error in one of the adapters
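
The formula itself is trivial; the value of the metric is in watching it over time and alerting on anomalies. A toy illustration with made-up counter values:

```java
public class FillRateSketch {
    public static void main(String[] args) {
        long bidRequests = 10_000;  // requests sent to one bidder
        long bidsReturned = 6_500;  // bids that bidder actually returned

        double fillRate = 100.0 * bidsReturned / bidRequests;
        System.out.printf("fill rate = %.1f%%%n", fillRate); // 65.0%

        // Alerting sketch: a near-zero fill rate may point to a broken adapter.
        if (fillRate < 1.0) {
            System.out.println("ALERT: suspiciously low fill rate");
        }
    }
}
```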

Bidder request errors, the average time to get bids from the bidder

  • A higher delay in receiving bids should also alert you, as it may indicate issues on the partner’s side. The same goes for an increase in the number of errors

The efficiency of requests to the bidder (auction win-rate)

  • Simply put, it’s the percentage of the bidder’s wins in the auction. This metric allows you to determine whether there are any connected bidders that return uncompetitive bids and thus do not bring any benefit to the publisher

Ad impressions on the client, by bidder

  • The fact that a bidder returned an ad to the server, or even won the auction there, does not mean the ad was actually shown on the client. To display an ad, the client requests the cache in addition to the server. Therefore, to monitor cache availability, you must also look at actual ad impressions on the clients
  • For an individual bidder, issues with showing creatives may indicate misconfigured advertising campaigns

Bidder ad creative errors

  • Just as impressions indicate that everything is going well, errors indicate some kind of problem on the partner’s side or in our infrastructure

Monitoring the quality of the server/cache/admin panel

We faced some limitations when implementing alerting in our internal infrastructure. For example, it is impossible to get reliable information about admin panel problems via product metrics alone. You can only figure out indirectly that the connection with the admin panel has been lost once the configuration is invalidated: in that case, requests to bidders simply will not be sent. This is why we rely primarily on technical alerts, though product metrics can also be useful in investigations.

While in the previous cases the metrics were collected from the server, here the main source of events is client analytics.

Client bid request rate and fill rate

The rate is the number of requests to the service per unit of time. The fill rate is the percentage of requests served by the advertising partner (relative to their total number).

The server’s total number of bid requests does not change much once seasonality is taken into account and DAU is maintained. The same goes for the fill rate, unless individual bidders are added or disabled. Here, the fill rate is calculated per placement in the application rather than per bidder. These metrics indicate connectivity to the server and the correctness of the client integration.

Bid request errors, the average time to receive bids from the server

These metrics are calculated similarly to the previous section but for the server rather than for individual bidders.

Fill rate for creatives, creative upload errors

Metrics on creatives indicate cache availability and ad traffic quality, because issues with receiving creatives are often localized within individual campaigns.

To be continued

The advertising infrastructure is critical to the operation of FUNCORP services. We’ve always paid a lot of attention to Prebid, one of the main header bidding auction mechanisms for our apps. The decision to support both the client and the server side of Prebid increases our responsibility manifold.

Therefore, in addition to the effort we’ve already made, we are going to keep developing our Prebid infrastructure in the following areas:

  • Stable operation and its constant monitoring
  • Security, including protection against various cyber threats, prevention of advertising fraud, and protection of our own and our partners’ interests
  • Transparent operation of all FUNCORP services as advertising platforms and the Prebid Server. This includes regular external audits, constant integration with leading analytics tools, and provision of detailed information to our partners
  • Further optimization of the program code and infrastructure for Prebid operation, including at the point of interaction between the client and server parts

We’ll keep you informed about all the changes and improvements in these areas, as well as about the development of our Prebid service. Follow us on Medium for updates!