Many developers I’ve talked with about web application performance assume that the major bottleneck is the database. Well, in some cases they are right, but who can guarantee that it’s true for your particular case? Moreover, some websites have no real-time content and most DB queries are cached, yet performance is still far from ideal. How do you identify the bottleneck in such a situation?
A while ago I used various good but not really reliable tools, such as built-in framework profilers and wrapping code snippets with microtime() snapshots, to figure out the slowest part of my application. But eventually I came to the conclusion that they don’t tell me what I really want to know.
So, what’s wrong with them…
Built-in framework profilers are fine if you just want to check how much time each component takes or which DB queries were executed, but if you want to know how long each method takes, they’re not enough. The issue is that most such profilers hook into pre-defined events/places in the framework, and all the other places remain a “black box” for them.
Microtime snapshots. Well, nothing much to say here. They’re fine if you want to measure the performance of a particular part of the code, but if you’re trying to find the slowest part, they just won’t work.
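For context, the microtime() snapshot approach looks roughly like this minimal sketch (the loop here is just a hypothetical stand-in for a suspect block of code); it shows why the technique only helps when you already know where to look:

```php
<?php
// Minimal sketch of the microtime() snapshot approach described above.
// You wrap one suspect block by hand; any hotspot outside the wrapped
// block stays invisible.
$start = microtime(true);            // high-resolution timestamp, float seconds

$sum = 0;                            // stand-in for the code under suspicion
for ($i = 0; $i < 100000; $i++) {
    $sum += $i;
}

$elapsed = microtime(true) - $start;
printf("suspect block: %.4f s\n", $elapsed);
```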
Finally, I found a solution that fits all my current needs: the xdebug profiler. Honestly, I had used it a few times in the past, but because of poor, laggy tools and the confusing interpretation of the cachegrind format (the file format the xdebug profiler uses to store function call timings), I couldn’t get anything useful out of it.
xdebug: install & configure
A few words about xdebug. It is a PHP extension for powerful debugging: it supports stack and function traces, profiling, memory allocation tracking, and script execution analysis.
Here is my xdebug profiler configuration with some inline comments:
xdebug.profiler_enable = 1
#path for generated profiles
xdebug.profiler_output_dir = "/var/data/php-profiles/"
#file name of generated profiles. %u means timestamp. The default is cachegrind.out.%p, but that would overwrite profiles for the same script, which is not really useful when most applications have a single entry point: index.php :)
xdebug.profiler_output_name = "cachegrind.out.%u"
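One tweak worth mentioning: with the configuration above, every request is profiled, which can quickly flood the output directory. xdebug 2 also supports enabling the profiler per request via a trigger; a possible configuration (hedged, assuming xdebug 2.x setting names) looks like this:

```ini
; profile only requests that carry the XDEBUG_PROFILE GET/POST parameter or cookie
xdebug.profiler_enable = 0
xdebug.profiler_enable_trigger = 1
```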
xdebug: how to get results
There is not much software for interpreting cachegrind results; most of it is listed on the official xdebug page. Personally, I used WinCacheGrind, but once a file grows beyond a few megabytes, WinCacheGrind crashes. I also tried the web tool Webgrind; it handled big files fine, but it was a bit slow.
What I want is a simple answer to one question: how much does each method “cost” me? I also want to work not just with a single profile file, but with a whole batch of them, e.g. all the profiles generated after running a load test against the application. And none of the listed tools offers that.
Well, I looked into the Webgrind source and wrote a PHP utility that processes one cachegrind file, or a whole batch of them, into a CSV report containing the method name, the number of calls, the total self cost, the self cost per call, the cumulative cost (cost including subcalls), and the cumulative cost per call. Quite simple, yeah?
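To give an idea of what such a utility does, here is a heavily simplified, hypothetical sketch of the core aggregation step: summing each function’s self cost across cachegrind data. Real xdebug output also has headers, summary lines, and cfn=/calls blocks for subcalls, which this sketch deliberately ignores; the function name is mine, not the tool’s actual API.

```php
<?php
// Sketch only: sum self costs per function from cachegrind-style text.
// In the format, each "fn=NAME" entry is followed by a cost line
// "<line> <cost>"; that first cost line is the function's self cost.
function sumSelfCosts(string $cachegrind): array
{
    $totals = [];
    $current = null;
    foreach (explode("\n", $cachegrind) as $line) {
        if (strpos($line, 'fn=') === 0) {
            $current = substr($line, 3);              // new function entry
        } elseif ($current !== null && preg_match('/^\d+ (\d+)/', $line, $m)) {
            // first cost line after fn= is the self cost; accumulate it
            $totals[$current] = ($totals[$current] ?? 0) + (int)$m[1];
            $current = null;                          // ignore subcall lines
        }
    }
    arsort($totals);                                  // most expensive first
    return $totals;
}

$sample = "fl=a.php\nfn=foo\n3 120\n\nfl=a.php\nfn=foo\n10 30\n\nfl=a.php\nfn=main\n1 50\n";
print_r(sumSelfCosts($sample)); // foo => 150, main => 50
```

The same loop, run over every file in a directory, is essentially the “bulk of profiles” mode described above.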
xdebug profiler analyzer
I could say a lot about how useful this tool has been for me, but it’s better to show you how I use it; maybe you will love it as well.
-f, --format Report format: table or csv
-r, --report Save the report to the given file
-t, --top Number of top methods to show, as a count or a percentage: 10 or 20%
#save a csv report to report.csv
./xdebug profiler -r report.csv cachegrind.out.1340990144_014725
#display a report table with the 10 slowest methods
./xdebug profiler -t 10 cachegrind.out.1340990144_014725 cachegrind.out.1340990145_014890
#display a report table with the 20% slowest methods
./xdebug profiler -t 20% /var/data/path-to-folder-with-cachegrind-files/
Example of output:
I hope you will have a chance to try it and share your ideas and suggestions.