Paulette McCroskey
Social Media Manager, Advanced Systems Group, LLC
Image of Firebaser Friday blog header

Welcome to #FirebaserFriday!

Join us for monthly mini-profiles of Firebase team members, aka “Firebasers”, from all around the world! Learn about their backgrounds, how they got started with Firebase, what Firebase products they work on, their favorite ways to de-stress, advice, and more.

For our first feature, we’re happy to introduce you to a long-standing member of the Firebase community. You may know him from the Firebase Release Notes, from one of his talks at Google events, or from one of his many answers on Stack Overflow. And now, we give you puf!

gif of Firebaser puf

How did you get started with Firebase?

I was in the habit of answering questions on Stack Overflow to learn about interesting new technologies, when questions about Firebase started showing up. My answers got noticed, and that led to me becoming a member of the Firebase team at Google. And 7 years later, answering questions about Firebase on Stack Overflow is still one of my favorite things to do each day!

What are you working on right now?

I just gave a talk at Flutter Vikings about synchronizing game state in a Flutter app, and, as part of the Flutter Puzzle Hack, about adding Firebase to your Flutter app. I'm also working with the Flutter GDEs to get more/better Flutter answers on Stack Overflow. Between that, answering questions on Stack Overflow, and the monthly Firebase Release Notes video, I keep busy. :)

Do you have a nickname? How did you get it?

Not sure if it counts as a nickname, but I am known as puf. While I respond to both Frank and puf equally, I'm usually the only puf in the room. I got the nickname in my teens, as I was really tired of all the variations of my last name, and I've been known as “puf” to most people since then.

What are you reading right now?

I read a lot. I just finished The Lincoln Conspiracy and am halfway through Carve the Mark and The Brothers Karamazov.

Where can we find your work?

You can find me on Twitter here.

Viswanathan Munisamy
Software Engineer
blog header image

Mobile users expect their apps to be fast and responsive, and apps with a slow startup time, especially during cold start, can leave them feeling frustrated. Firebase Performance Monitoring is a tool that helps you gain insight into various app quality signals, including your application’s startup time. This article takes a deep dive into Firebase Performance Monitoring and its impact during an Android application’s cold start.

Library impact at startup

Modern Android applications perform many operations before they become responsive to the user. The “startup time” of an application measures the time from when the application icon is tapped on the device screen to when the application is responsive. Code executing during startup includes the application's own code, along with code from dependencies that are involved in app startup. Any code that executes before the first frame is drawn is reflected in the time-to-initial-display metric.

Image showing the App and SDK startup time

Many libraries do not need to be initialized during the application startup phase. Some libraries, like Firebase Performance Monitoring, provide value precisely by initializing during startup: early initialization is what enables measurement of app start, of CPU/memory impact during the early phases of startup, and of the performance of network requests made during application startup. However, it can also increase the application's startup time.

You can measure the impact of a library on your app’s startup by creating two versions of your app (with and without the library) and comparing the time from application launch to the activity’s first frame drawn in each.
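
For illustration, here is a minimal sketch of one way to log that interval from within the app itself, using a ViewTreeObserver.OnPreDrawListener. The class names, layout resource, and log tag are placeholders, and the timestamp starts at Application.onCreate() rather than true process start; Macrobenchmark (described below) gives a more rigorous measurement.

```kotlin
import android.app.Application
import android.os.Bundle
import android.os.SystemClock
import android.util.Log
import android.view.View
import android.view.ViewTreeObserver
import androidx.appcompat.app.AppCompatActivity

class MyApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        // Record a reference timestamp as early as possible in app code.
        // Note: ContentProvider initialization has already run at this point.
        appCreateMs = SystemClock.elapsedRealtime()
    }

    companion object {
        var appCreateMs = 0L
    }
}

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main) // placeholder layout

        val content = findViewById<View>(android.R.id.content)
        content.viewTreeObserver.addOnPreDrawListener(
            object : ViewTreeObserver.OnPreDrawListener {
                override fun onPreDraw(): Boolean {
                    // Only measure the first frame, then remove the listener.
                    content.viewTreeObserver.removeOnPreDrawListener(this)
                    val elapsed = SystemClock.elapsedRealtime() - MyApplication.appCreateMs
                    Log.d("StartupTiming", "First frame drawn ~$elapsed ms after Application.onCreate()")
                    return true
                }
            }
        )
    }
}
```

Run this with and without the library dependency and compare the logged values across several cold starts.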

Firebase Performance Monitoring at startup

Firebase Performance Monitoring performs the following tasks during an application cold start:

  • Registers dependencies
    • Configuration management: controls whether performance metrics are measured on a given percentage of devices, based on locally cached configurations
    • Firebase Installations: the installation service that identifies each device with Firebase installed
  • Initializes the performance library
  • Tracks the startup time of the application
  • Measures detailed system metrics (CPU/Memory) for a fraction of the cold starts (See sessions).

Firebase Performance Monitoring does all of this without developers needing to add any code to their application.

Image showing the timeline of performance monitoring app startup time optimization

Performance Monitoring app startup time optimization

There are many tools available to profile the performance of an application. We used the following tools for measuring the startup time and analyzing the impact of every single method in our library during application cold start.

Macrobenchmark is a recent tool launched at Google I/O '21 to measure the startup and runtime performance of your application on physical devices. We used this tool to measure the overall time taken by an application during a cold start.
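
As a sketch of how such a measurement can look, the test below uses the Macrobenchmark JUnit4 API to measure cold startup. The package name is a placeholder, and the parameter and class names follow the stable 1.1.0 API, so they may differ slightly in the alpha version mentioned later in this post.

```kotlin
import androidx.benchmark.macro.CompilationMode
import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.StartupTimingMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class StartupBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun coldStartup() = benchmarkRule.measureRepeated(
        packageName = "com.example.app",          // placeholder: your app's package name
        metrics = listOf(StartupTimingMetric()),  // reports time to initial display
        compilationMode = CompilationMode.Full(),
        startupMode = StartupMode.COLD,
        iterations = 5
    ) {
        pressHome()
        startActivityAndWait()                    // launches the default activity and waits for the first frame
    }
}
```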

Method tracing reveals all the classes and methods involved in the runtime of an application: it lists each method that executes and how long its execution takes. Because method tracing adds overhead, the durations recorded in the trace file are inflated. Nonetheless, they can be used as a guiding metric to understand the relative impact of a method.
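
For reference, a trace of the startup window can be captured programmatically with the android.os.Debug APIs, as in the sketch below; the application class and trace name are illustrative. Android Studio's CPU profiler can record the same information without code changes.

```kotlin
import android.app.Application
import android.os.Debug

class MyApplication : Application() {
    override fun onCreate() {
        // Begin recording a method trace as early as possible during startup.
        // The trace file ("startup.trace") is written to app-accessible storage
        // (the exact location varies by Android version) and can be opened in
        // Android Studio to inspect per-method durations.
        Debug.startMethodTracing("startup")
        super.onCreate()
        // ... initialization work to analyze ...
    }
}

// Elsewhere, once startup is considered complete (e.g. after the first frame
// of the launch activity is drawn), stop recording:
// Debug.stopMethodTracing()
```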

Using the method tracing APIs, we identified multiple opportunities within the application’s lifecycle events to reduce the impact of the library during application startup, including:

  • ContentProvider initialization
  • Activity onCreate()
  • Activity onResume()

We optimized the library in all these phases. Some key optimizations include:

  • In the ContentProvider phase, we moved away from eager initialization to lazy initialization, creating components (e.g. dependencies) only when they are needed (see the sketch after this list).
  • In the Activity onCreate() and onResume() phases, we moved many non-essential operations off the main thread onto a background thread, allowing the main thread to focus on the application's needs (#1, #2).
  • We delayed initialization of certain non-urgent Firebase Performance components (e.g. the event dispatch service) until after the application has performed its startup operations.
  • We delayed fetching of remote configurations (e.g. Firebase Remote Config).
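
To make the first two of these patterns concrete, here is a minimal, hypothetical sketch of lazy initialization and of deferring non-essential work to a background thread; the component names are illustrative and are not the actual Firebase Performance internals.

```kotlin
import java.util.concurrent.Executors

// Hypothetical component standing in for something like an event dispatch service.
class EventDispatcher {
    fun start() { /* ... */ }
}

// 1. Lazy initialization: the component is created the first time it is used,
//    instead of eagerly during the ContentProvider phase.
val eventDispatcher: EventDispatcher by lazy { EventDispatcher() }

// 2. Background work: keep non-essential operations off the main thread so that
//    startup on the main thread stays focused on the application's own needs.
private val backgroundExecutor = Executors.newSingleThreadExecutor()

fun onStartupComplete() {
    backgroundExecutor.execute {
        // e.g. read cached configuration, then start the event dispatch service.
        eventDispatcher.start()
    }
}
```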

Impact of Firebase Performance Monitoring

To benchmark the impact of Firebase Performance Monitoring, we built a simple Android application. Benchmarking results depend on many factors, so to keep the measurement close to real-world conditions, we measured startup time under the following setup:

  • A simple, empty Android application
  • A 2019 Samsung device
  • Android API level: 29
  • Macrobenchmark version: 1.1.0-alpha09 (compilation mode: Full)
  • Firebase Performance library version: 20.0.4

We ran the application with and without the Firebase Performance library to understand the startup time impact caused by the library, and used Macrobenchmark to measure the duration of the startup.

For an empty application under the conditions above, the table below captures the impact on application startup before and after the optimizations.

Image showing a chart capturing the impact of the application startup before and after optimizations

With all the above changes, we have been able to reduce the impact of the library during startup time by more than 35%.

What’s next?

Application startup time improvement is a moving target for all developers. Although device hardware has improved dramatically in recent years, improving startup performance remains a challenge. We are continuously investing in reducing our startup time impact.

Update your Firebase Performance Monitoring SDK to the latest version to get these recent startup time improvements, along with real-time metrics and alerts.

Sachin Kotwani
Senior Product Manager
Elvis Sun
Software Engineer

Mobile app and game developers can use on-device machine learning in their apps to increase user engagement and grow revenue. We worked with game developer Halfbrick to train and implement a custom model that personalizes the in-game experience based on the player's skill level and session details, increasing interactions with a rewarded video ad unit by 36%. In this blog post we'll walk through how Halfbrick implemented this functionality and share a codelab for you to try it yourself.

Background

Every end user or gamer is different, and personalizing an app purely through hard-coded logic can quickly become complex and hard to scale. However, by taking all relevant signals into account, game developers can train a machine learning model to create a personalized experience for each user. Once trained, they can “ask” the model for the best recommendation for a particular user given a set of inputs (e.g. “Which item from the catalog should I offer to a user who has reached skill level ‘pro’, is on level 5, and has 1 life left?”).

Custom ML models can be architected, tuned, and trained based on the inputs that you think are relevant to your decision, making the implementation specific to your use case and customers. If you are looking for a simpler personalization solution that doesn’t require you to train your own model and where the answer is unlikely to change multiple times per session (e.g. "What is the right frequency to offer a rewarded video to each user?") we recommend using Firebase Remote Config’s personalization feature.

Image showing game from Halfbrick

We recently worked with game developer Halfbrick to optimize player rewards within Jetpack Joyride, one of their most popular games with over 500M downloads. In between levels, players are presented with an option to obtain a digital good by watching a rewarded video ad. Our objective was to increase the conversion rate for this particular interaction. Before the study, the digital good offered in return for watching the ad was always selected at random. Using a custom ML model, we were able to personalize which reward to offer using inputs such as the gamer's skill level and information about the current session (e.g. why they died in the last round, what powerups they selected), and increased conversions by 36% in just our first iteration of the experiment. Once trained, the model runs fully on-device, so it doesn't require network requests to a cloud service and there are no per-inference costs.

Solution

ML workflows are largely anchored on the following components: a problem to solve, data to train the model, training of the model, model deployment, and client-side implementation. We have described the problem above, and will now cover the rest of the steps. You can also follow along with more detailed implementation steps in the codelab.

Image showing how to collect the data necessary to train the model

Collect the data necessary to train the model

Halfbrick collected 129 different signals to train the model. These were of different types, such as event timestamps, user skill levels, information about recent sessions, and other information about the user's gameplay style. This is where intuition about which factors could influence the outcome is very helpful (e.g. a user's skill level, point balance, or choice of avatar may be useful when predicting which powerups they'd prefer).

We can collect training data easily by sending events to Google Analytics for Firebase and then exporting them to BigQuery. BigQuery is Google's fully managed, serverless data warehouse that enables scalable analysis over petabytes of data. In our scenario, it makes it easy to summarize and transform the data to get it ready for model training, as well as to combine it with other data sources our app might be using.
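
As a sketch of what the collection side might look like, the snippet below logs a custom Analytics event carrying a few of those signals using the Firebase Analytics KTX API; the event and parameter names are illustrative rather than Halfbrick's actual schema.

```kotlin
import com.google.firebase.analytics.ktx.analytics
import com.google.firebase.analytics.ktx.logEvent
import com.google.firebase.ktx.Firebase

// Log the signals we want to train on at the end of each round. Once BigQuery
// export is enabled, these events (and their parameters) become rows we can
// query and transform into training examples.
fun logRoundFinished(skillLevel: Long, causeOfDeath: String, powerupSelected: String) {
    Firebase.analytics.logEvent("round_finished") {   // illustrative event name
        param("skill_level", skillLevel)
        param("cause_of_death", causeOfDeath)
        param("powerup_selected", powerupSelected)
    }
}
```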

Train the model, evaluate its performance, and iterate

Training requires a model architecture (e.g. number of layers, types of layers, operations), data, and the actual training process. We picked a reference architecture implemented in TensorFlow, and iterated both on the data we used to train it and the architecture itself. In a production workflow we would retrain the model regularly with fresh data to ensure that it continues to behave optimally.

In our codelab we have abstracted away the process of refining the model architecture, so you can still implement this workflow without having much of a background in ML.

Deploy the latest version of the model

Traditional ML models that run in the cloud can be updated easily, just as you would update a web application or service. With on-device ML, we need a strategy to get the latest model onto the user's device. The first version can be bundled with the app, so that a model is ready to go when the user first installs it. Subsequent updates are made by deploying models to Firebase ML. With a combination of Remote Config and Firebase ML, you can direct users to the right model name (e.g. “active_model = my_model_v1.0.3”), and the app will then automatically download the latest available model.
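
A minimal sketch of that download step, assuming the Remote Config parameter is named active_model as in the example above, might look like the following; error handling and threading are simplified.

```kotlin
import com.google.firebase.ktx.Firebase
import com.google.firebase.ml.modeldownloader.CustomModelDownloadConditions
import com.google.firebase.ml.modeldownloader.DownloadType
import com.google.firebase.ml.modeldownloader.FirebaseModelDownloader
import com.google.firebase.remoteconfig.ktx.remoteConfig
import java.io.File

// Resolve the active model name from Remote Config, then ask Firebase ML for it.
fun downloadActiveModel(onReady: (File?) -> Unit) {
    val modelName = Firebase.remoteConfig.getString("active_model")

    val conditions = CustomModelDownloadConditions.Builder()
        .requireWifi()   // only download new model versions over Wi-Fi
        .build()

    FirebaseModelDownloader.getInstance()
        // LOCAL_MODEL_UPDATE_IN_BACKGROUND returns the local model immediately
        // (if one exists) and updates it in the background when a newer version
        // is available on Firebase ML.
        .getModel(modelName, DownloadType.LOCAL_MODEL_UPDATE_IN_BACKGROUND, conditions)
        .addOnSuccessListener { model -> onReady(model.file) } // file may be null if nothing is downloaded yet
        .addOnFailureListener { onReady(null) }
}
```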

Run the model on the device

Once the model is deployed on the device, we use the TensorFlow Lite runtime to “make inferences” or ask questions of the model based on a set of inputs. You can think of this as asking, “What digital good should I show a user based on the following characteristics?” We configured our implementation so that a certain percentage of sessions are shown a random option, which will help us gather new data and keep the model fresh and updated.
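
Concretely, an inference with the TensorFlow Lite Interpreter could look roughly like the sketch below; the input layout (one row of feature values) and the output size are assumptions that must match how the model was trained, and in a real app you would keep the interpreter around rather than recreating it per call.

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

// Run the downloaded model on one set of inputs and return the index of the
// digital good with the highest predicted score.
fun predictBestReward(modelFile: File, features: FloatArray, numRewards: Int): Int {
    val interpreter = Interpreter(modelFile)
    val input = arrayOf(features)                 // shape [1, numFeatures]
    val output = arrayOf(FloatArray(numRewards))  // shape [1, numRewards]

    interpreter.run(input, output)
    interpreter.close()

    // Pick the reward with the highest predicted score.
    return output[0].indices.maxByOrNull { output[0][it] } ?: 0
}
```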

Try it yourself!

We have created a codelab for you to easily follow and replicate these steps in your own app or game. In this implementation you will encounter a scenario similar to HalfBrick's, but we've reduced the number of input features to 6 to make it easier for you to replicate and learn from. You should start with these and iterate as you see fit.

We hope you can use this ML workflow to build smarter and more engaging apps for your users and grow your business!

Stephen McDonald
Developer Relations Engineer, Google Pay

Back in 2019 we launched Firebase Extensions - pre-packaged solutions that save you time by providing extended functionality to your Firebase apps, without the need to research, write, or debug code on your own. Since then, a ton of extensions have been added to the platform covering a wide range of features, from email triggers and text messaging, to image resizing, translation, and much more.

Google Pay Firebase Extension

We're now excited to have launched a new Google Pay Firebase Extension at Firebase Summit 2021, which brings the ease of the Google Pay API to your Firebase apps.

Image that says Make payments with Google Pay

With the Google Pay Firebase Extension, your app can accept payments from Google Pay users, using one or more of the many supported Payment Service Providers (or PSPs), without the need to invoke their individual APIs.

With the extension installed, your app can pass a payment token from the Google Pay API to your Cloud Firestore database. The extension listens for a request written to the path defined during installation, sends the request to the PSP's API, and then writes the response back to the same Firestore node, which you can listen to and respond to in real time.
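
In rough terms, the client-side flow could look like the sketch below; the collection name and field names are placeholders, and the exact fields your installation expects depend on the path you configured and on your PSP.

```kotlin
import com.google.firebase.firestore.ktx.firestore
import com.google.firebase.ktx.Firebase

// Write a payment request containing the Google Pay token to the collection
// configured at install time, then listen on that document for the response
// the extension writes back after calling the PSP.
fun submitPayment(paymentToken: String, amount: Long, currency: String) {
    val db = Firebase.firestore
    val request = hashMapOf(
        "paymentToken" to paymentToken,  // token returned by the Google Pay API
        "amount" to amount,
        "currency" to currency
    )

    db.collection("payments").add(request)   // "payments" = path chosen during installation
        .addOnSuccessListener { docRef ->
            docRef.addSnapshotListener { snapshot, error ->
                if (error != null || snapshot == null) return@addSnapshotListener
                val pspResponse = snapshot.get("response")  // placeholder field name
                if (pspResponse != null) {
                    // Handle the PSP's success or failure result here.
                }
            }
        }
}
```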

Open Source

Like all Firebase Extensions, the Google Pay Firebase Extension is entirely open source, so you can modify the code yourself to change the functionality as you see fit, or even contribute your changes back via pull requests - the sky's the limit.

Summing it up

Whether you're new to Google Pay or Firebase, or an existing user of either, the new Google Pay extension is designed to save you even more time and effort when integrating Google Pay and any number of Payment Service Providers with your application.

Get started with the Google Pay extension today.

Jon Mensing
Product Manager

An important part of turning your app into a business is optimizing your user experience to drive the bottom-line results you want. A popular way to do this is through manual experimentation, which involves setting up A/B tests for different components of your app and finding the top-performing variant. Now, you can save time and effort - and still maximize the objectives you want - with Remote Config’s latest personalization feature. Personalization harnesses the power of machine learning to automatically find the optimal experience for each user to produce the best outcomes, taking the load off you.

At Firebase Summit 2021, we announced that Remote Config personalization is officially available in beta! Let’s take a closer look at this new feature, how it differs from A/B testing, and how you can use it today to grow your business.

What is Remote Config personalization?

Remote Config lets you dynamically control and change the behavior and appearance of your app without releasing a new version or setting up any complex infrastructure. You can use Remote Config to implement feature flags, perform A/B tests, customize your app for different user segments, and now, with personalization, optimize your user experience with minimal work. All you need to do is specify the objective you want to maximize, and personalization will continuously find and apply the right app configuration for each user, taking their behavior and preferences into account and tracking impact on secondary metrics along the way. For example, you can personalize the difficulty of your game according to player skill levels to maximize engagement and session duration.

Image from Firebase Summit showing animated characters

How does Remote Config personalization differ from A/B testing?

A/B testing and personalization are both good frameworks for app optimization. While they share some similarities, there are a few big differences that are worth pointing out. First, A/B testing requires you to be hands-on throughout the whole process - from setting up the experiment, determining the variables, monitoring and analyzing results, to rolling out the winning variant. With personalization, you determine the experiences you want to try and state the objective you want to maximize. Then, the personalization algorithm uses machine learning to do the rest. It automatically tries different alternatives with different users, learns which alternatives work best, and chooses the alternative that is predicted to maximize your objective.

Second, A/B testing finds a single, global optimum, while personalization gets more granular to find the optimum treatment for each user so you’re not leaving value on the table.

And a third important difference is the timing required for each feature. A/B testing usually takes at least a few weeks to run an experiment and return a result, whereas personalization goes to work immediately, optimizing selections from the moment it is enabled.

When should you use A/B testing vs. Remote Config personalization?

Table showing when to use A/B testing vs Remote Config personalization

Halfbrick, the game studio behind titles like Jetpack Joyride, Dan the Man, and the instant-classic Fruit Ninja, used personalization to optimize ad frequency, which led to a 16% increase in revenue without affecting engagement or retention. They also used personalization to determine the best time (i.e. when users are most enjoying the game) to ask users to rate their app, and were able to boost positive app store ratings by 15%.

In their own words:

"The granularity achieved with Remote Config's personalization feature is impossible for a human to instrument. Personalization has given us new insight into how we can optimize our ad strategy and even helped us challenge our own assumptions that players don't like too many ads."


— Miguel Pastor, Product Manager, Halfbrick

Ahoy Games, another early customer, tried personalization in a number of their games and successfully grew in-app purchases by 12-13% with little to no effort from their team.

Their CEO, Deniz Piri, had this to say:

“We are very impressed with how magical the personalization feature has been. It's so much more than an A/B test, as it continuously optimizes and serves the right variant to the right groups of people to maximize conversion goals. Without Firebase, we would have a harder time succeeding in an arena full of bigger corporations with our humble 13-person team.”

How to set up personalization in your app

Let’s walk through an example - say you wanted to personalize the moment you show an ad to players in your game based on how many levels they’ve completed, with the objective of maximizing the number of ad clicks generated in a gaming session.

We’ll suppose you have three alternatives that personalization can choose from:

  1. Show an ad after every level completed
  2. Show an ad after every three levels completed
  3. Show an ad after every seven levels completed

Let’s look at how you can set this up in your application using Remote Config.

Alternatively, check out this video walkthrough of the personalization feature, which includes an overview and an example of how to personalize the timing of an app rating prompt to maximize the likelihood of users submitting a review. The personalization documentation also has complete instructions for getting started.

The first step is to go into the Remote Config section of the Firebase console and create a Remote Config parameter that can be used to provide one of these alternatives in your application. Navigate to the Firebase console > Remote Config and click “Create configuration” if it’s your first time using Remote Config, or “Add parameter” if you already have some parameters created. This opens the parameter editor shown below.

Image showing create parameter

Next, click on Add new > Personalization which will open the personalization editor where you can specify alternative values for the parameter, select the primary objective, as well as set additional metrics and targeting conditions for the personalization. In this example, I’m using ad clicks as the primary optimization goal, and tracking user engagement as a secondary metric to monitor as personalization delivers personalized values to users.

Screenshot: How to create a personalized parameter, including personalization goals and additional metric tracking, within a few clicks in the Remote Config parameter editor.

Now you’ll just need to click on “Save” and “Publish changes” to make the new parameter available to any running application instances. The final step is to implement and use your new Remote Config parameter in your application code. You can follow the getting started guide to complete these final steps based on which platform your app is targeting.
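
As an illustration of that last step, the snippet below fetches and activates Remote Config and reads the personalized value; the parameter key ad_frequency_levels and the default value are assumptions standing in for the parameter you created above.

```kotlin
import com.google.firebase.ktx.Firebase
import com.google.firebase.remoteconfig.ktx.remoteConfig
import com.google.firebase.remoteconfig.ktx.remoteConfigSettings

// Fetch the personalized configuration and hand the chosen ad frequency
// ("show an ad after every N completed levels") to the game logic.
fun applyPersonalizedAdFrequency(onValue: (Long) -> Unit) {
    val remoteConfig = Firebase.remoteConfig
    remoteConfig.setConfigSettingsAsync(
        remoteConfigSettings { minimumFetchIntervalInSeconds = 3600 }
    )
    // In-app default used until a fetched value is activated.
    remoteConfig.setDefaultsAsync(mapOf("ad_frequency_levels" to 3L))

    remoteConfig.fetchAndActivate().addOnCompleteListener {
        onValue(remoteConfig.getLong("ad_frequency_levels"))
    }
}
```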

From here, personalization will go to work immediately, selecting the best predicted alternative for each user, and collecting metrics along the way to help you determine the effectiveness of personalization in optimizing towards your primary goal. Over time, you’ll see a results summary screen similar to the one in the screenshot below:

Screenshot of Personalization in Firebase

The box in gray represents the baseline holdout group, while the box in blue represents the group of users who’ve received personalized values. The total lift shows how much additional value personalization has generated relative to the holdout group that didn’t receive personalized values. Since the baseline group is much smaller than the personalization group, its numbers are scaled up so that the two groups are comparable and the total lift can be calculated. You can also break down baseline performance details to see how each alternative value performed individually.

As time goes by, you can revisit the results summary page to ensure that the personalization is continuing to deliver more value than the baseline group, maximizing your goal automatically as personalization and Remote Config do their work.

Now that personalization is available in public beta you can start scaling your app today without scaling your effort. Check it out in the Firebase console today, or take a look at our documentation to learn more.