MetricKit Internals: Insights into your iOS app performance

Igor Asharenkov, March 24, 2023


At AppSpector, we’ve spent some time exploring what Apple presented at WWDC. This post looks at MetricKit and the tools behind it for an improved app monitoring experience.

Measuring your app’s performance during development is a piece of cake: Xcode provides gauges with memory and CPU load, and you can attach Instruments to the simulator or your test device and even write custom instruments.

For more, see our articles about custom instrument packages: part 1 and part 2.

Once you understand the importance of performance tuning, nothing stops you from measuring almost anything your app does during development. Still, things get complicated in the AppStore environment, when your app goes to real users. No matter how thoroughly you test your app, the real world always has a bunch of surprises that will influence performance and user experience.

Of course, many tools out there gather various metrics in the production environment. Still, most of them are limited by iOS SDK restrictions and by the influence the monitoring itself has on application behavior.

Apple decided to fill the gap and bless developers with a tool that helps them gather and analyze app performance metrics in the production environment. It consists of MetricKit (a framework that gives you access to metrics provided by the OS) and a separate tab in the Xcode 11 organizer where you can find metrics from your apps. We will focus on MetricKit, because the metrics browser in Xcode only works with apps submitted to the AppStore.


The framework architecture is relatively straightforward. At its center is the MXMetricManager class, a singleton that provides most of the framework’s APIs.

In general, the workflow has three main steps:

  1. You initialize MXMetricManager and assign an observer to it.
  2. You optionally implement custom metrics in your app using the signpost APIs.
  3. Finally, you handle the received metrics in the observer's didReceive(_:) method (e.g., send them to your backend for further analysis).
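The steps above can be sketched in Swift. This is a minimal sketch, not production code: `sendToBackend` and the "Checkout" signpost category are placeholder names I made up, while the MXMetricManager and mxSignpost calls are the real public API.

```swift
import MetricKit
import os.signpost

// Step 2 (optional): a custom signpost-backed metric.
// "Checkout" is an arbitrary category name used for illustration.
let checkoutLog = MXMetricManager.makeLogHandle(category: "Checkout")

func measurePurchase() {
    mxSignpost(.begin, log: checkoutLog, name: "purchase")
    // ... the work being measured ...
    mxSignpost(.end, log: checkoutLog, name: "purchase")
}

// Steps 1 and 3: register an observer and handle incoming payloads.
final class MetricsObserver: NSObject, MXMetricManagerSubscriber {
    override init() {
        super.init()
        MXMetricManager.shared.add(self)      // step 1: register
    }

    deinit {
        MXMetricManager.shared.remove(self)
    }

    // Step 3: payloads arrive here, typically about once per day.
    func didReceive(_ payloads: [MXMetricPayload]) {
        for payload in payloads {
            sendToBackend(payload.jsonRepresentation())
        }
    }

    private func sendToBackend(_ json: Data) {
        // Placeholder: upload the JSON to your analytics endpoint.
    }
}
```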

Metrics come to you as an array of MXMetricPayload instances. Each payload encapsulates a set of metrics, metadata, and timestamps; it is essentially a simple wrapper around MXMetric subclasses, one per metric type.

Apple documents the metric types well, so we will only stop here briefly. However, one interesting thing is worth noticing: MXMetric provides a public API to serialize itself to an NSDictionary or JSON, which is a bit unusual.
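For instance, turning a payload into something you can upload takes a single call. A sketch of that public serialization API:

```swift
import MetricKit

// Both MXMetricPayload and each MXMetric subclass can serialize
// themselves, which makes shipping them to a backend trivial.
func serialize(_ payload: MXMetricPayload) {
    let json: Data = payload.jsonRepresentation()
    let dict: [AnyHashable: Any] = payload.dictionaryRepresentation()
    print("JSON bytes: \(json.count), top-level keys: \(dict.keys.count)")
}
```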

MetricKit internals

MetricKit is straightforward, but it’s always exciting to see how things work from the inside, and diving deeper is more intriguing when you have a specific task. So I decided I wanted to feed MetricKit stubbed metrics and then force it to deliver metric updates whenever I wanted.

Of course, you can use the `Debug -> Simulate MetricKit Payloads` command in Xcode, but it doesn't let you feed in your own metrics data. True, it’s not a very useful command on its own, but it gives you a direction for research, and it's fun ;).

To start, we need the MetricKit binary. Xcode shows it in the frameworks list when you add it via the “link binary with libraries” dialog. If you open the MetricKit framework bundle, you will see the MetricKit.tbd file inside (sized at just 4 KB).

TBD stands for 'text-based dylib stub': a YAML file with a dylib description, exported symbols, and a path to the dylib binary. Linking against tbd files reduces binary size; the real dylib binary is loaded from the OS at runtime using the path provided in the tbd file. Here is what the file looks like when you open it in Xcode:
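An abridged, illustrative sketch of a typical tapi-tbd file (field values here are representative of the format, not copied verbatim from the SDK):

```yaml
--- !tapi-tbd-v3
archs:            [ arm64, arm64e ]
platform:         ios
install-name:     '/System/Library/Frameworks/MetricKit.framework/MetricKit'
exports:
  - archs:        [ arm64, arm64e ]
    objc-classes: [ MXMetricManager, MXMetricPayload, MXCPUMetric ]
...
```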

Using a path from the TBD file, we can quickly get the MetricKit binary for further research, but there is an even more straightforward method.

The Mach-O header of our application's binary includes a record of the path to every dynamically linked library, which can easily be obtained using otool with the -l flag.

Here is the output for a test project I have built:

→ otool -l ./Metrics | grep -i metrickit
name /System/Library/Frameworks/MetricKit.framework/MetricKit (offset 24)

We can see the same path we saw earlier in the tbd file. With the framework binary in hand, we can finally look at the internals. I usually use Hopper Disassembler for this: easy to use, yet a potent tool for inspecting binaries.

Once we open the MetricKit binary, we navigate to the ‘Proc’ tab and expand the ‘Tags’ list to see all the exported symbols. Selecting one of them (for example, MXMetricManager) shows all its methods below, and selecting a method shows its disassembled content on the right:

When browsing through the MXMetricManager method list, it’s easy to notice the '_checkAndDeliverMetricReports’ method. It looks like this is what we need to call to force MetricKit to deliver updates to subscribers.

Unfortunately, calling it didn’t result in a subscriber call, which probably means it had no metric data to deliver. Looking at the method implementation, we notice a few interesting things: it iterates over the contents of the /Library/Caches/MetricKit/Reports directory.

Then it tries to unarchive an MXMetricPayload instance from each item on disk, and in the end, iterates over the registered subscribers and calls the ‘didReceive’ method with the list of payloads.
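Pieced together from the disassembly, the logic above is roughly equivalent to this Swift sketch. This is a reconstruction for illustration, not the actual implementation; names and error handling are simplified.

```swift
import MetricKit

// Rough reconstruction of what _checkAndDeliverMetricReports appears
// to do, based on the disassembly.
func checkAndDeliverMetricReports(to subscribers: [MXMetricManagerSubscriber]) {
    let library = FileManager.default.urls(for: .libraryDirectory,
                                           in: .userDomainMask)[0]
    let reportsDir = library.appendingPathComponent("Caches/MetricKit/Reports")

    let files = (try? FileManager.default.contentsOfDirectory(
        at: reportsDir, includingPropertiesForKeys: nil)) ?? []

    // Unarchive an MXMetricPayload from every file we can read.
    let payloads: [MXMetricPayload] = files.compactMap { url in
        guard let data = try? Data(contentsOf: url) else { return nil }
        return try? NSKeyedUnarchiver.unarchivedObject(
            ofClass: MXMetricPayload.self, from: data)
    }

    guard !payloads.isEmpty else { return }
    subscribers.forEach { $0.didReceive(payloads) }
}
```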

We don’t have anything under /Library/Caches/MetricKit/Reports, but we need some archived MXMetricPayload instances there. So let’s build them and put them on disk before calling ‘_checkAndDeliverMetricReports’. The plan is to create an MXMetricPayload instance, create and add some MXMetric objects to it, and then archive the payload instance to disk. Calling ‘_checkAndDeliverMetricReports’ after all that should result in our subscriber being called with our stub as an argument.

Looking through Apple's docs on the payload and metrics, you will notice they don’t have any public initializers, and most properties are read-only.

Again, we return to Hopper to look at the MXMetricPayload methods list:

Here, we can see its initializers and methods to assign metrics. Calling these private methods is easy with NSInvocation and ‘performSelector’ thanks to Objective-C's dynamic nature.
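A sketch of that mechanism follows. The selector ‘setCpuMetrics:’ below is a guess matched to the public read-only cpuMetrics property; take the real initializer and setter names from the Hopper symbol list.

```swift
import MetricKit

// Build a payload through runtime lookups. 'setCpuMetrics:' is a
// guessed private selector; verify the real name in the disassembly.
func makeStubPayload(with cpuMetric: MXCPUMetric) -> MXMetricPayload? {
    guard let cls = NSClassFromString("MXMetricPayload") as? NSObject.Type else {
        return nil
    }
    let payload = cls.init()

    let setter = NSSelectorFromString("setCpuMetrics:")
    if payload.responds(to: setter) {
        payload.perform(setter, with: cpuMetric)
    }
    return payload as? MXMetricPayload
}
```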

For example, we’ll build a CPU metric and add it to the payload (you can find a complete code snippet here).

In the end, we archive the built payload instance and write it to the /Library/Caches/MetricKit/Reports directory.
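In Swift, the archiving step might look like this sketch (the file name "stub-payload" is arbitrary, and whether the real reports are written with secure coding enabled is worth verifying against an actual report file):

```swift
import MetricKit

// Archive a payload into the directory _checkAndDeliverMetricReports scans.
func store(_ payload: MXMetricPayload) throws {
    let library = FileManager.default.urls(for: .libraryDirectory,
                                           in: .userDomainMask)[0]
    let reportsDir = library.appendingPathComponent("Caches/MetricKit/Reports")
    try FileManager.default.createDirectory(at: reportsDir,
                                            withIntermediateDirectories: true)

    let data = try NSKeyedArchiver.archivedData(withRootObject: payload,
                                                requiringSecureCoding: false)
    try data.write(to: reportsDir.appendingPathComponent("stub-payload"))
}
```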

Now it’s time to call ‘_checkAndDeliverMetricReports’, which should finally result in a subscriber call, this time passing our stubbed payload as an argument.
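Invoking the private method is one more performSelector call, guarded with responds(to:) since private API can disappear in any OS update:

```swift
import MetricKit

let deliver = NSSelectorFromString("_checkAndDeliverMetricReports")
if MXMetricManager.shared.responds(to: deliver) {
    MXMetricManager.shared.perform(deliver)
    // Our subscriber's didReceive(_:) should now fire with the stub payload.
}
```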

Where do metrics come from?

Getting metric reports is easy with MetricKit, but you are likely interested in how the reports appear under your app's /Library directory in the first place. Here’s how it’s done: while digging inside the framework binary, I noticed the ‘_createXPCConnection’ method. Inspecting its implementation makes it clear: it builds an NSXPCConnection to a named XPC service with two interfaces, ‘MXXPCServer’ and ‘MXXPCClient’, for the server and client sides. If you look at the protocol description:

And at the MXMetricManager initializer, it becomes evident that MetricKit registers itself as a client of a remote service, which puts the report files into the app's container. But this post is already long enough, so we’ll explore how the MetricKit XPC service works in one of our next posts.


MetricKit is a unique and irreplaceable tool if you care about your app's performance under real circumstances in a production environment.

Unfortunately, at the moment it’s not possible to look at the Xcode organizer’s ‘Metrics’ UI, beyond what the demo at the WWDC session showed us.

The Xcode organizer’s ‘Metrics’ UI could be a priceless tool for taking your user experience to the next level by eliminating glitches and performance issues in your code.

One disadvantage I can see right now is the lack of detail for each metric type: the only breakdown is by app version, and you can’t see metrics for a particular group of devices, OS versions, regions, etc.

But, of course, you can always send your metrics data to your own service for further processing, along with any vital info you need, and attach it to issues in your bug tracker. At AppSpector, we are already working on extending our performance monitor functionality with data obtained from MetricKit.