
Resources for identifying performance issues

Many performance issues can be identified with available performance analysis tools. Once these performance issues are identified, developers should work towards correcting them.

HTTP request logs

HTTP request Log Shipping provides request logs for a site, which can be analyzed with a variety of tools such as GoAccess. Logs include the page generation time, response code, and cache status for each request.

Analyzing a site’s traffic patterns makes it possible to identify the slowest requests and focus attention on the code that needs to be optimized.

Review request logs for:

  • The presence of significantly slower requests
  • Patterns of bot activity or bot requests (usually through the logged User Agents) for pages that do not need to be indexed (e.g. bots should be requesting sitemaps and individual posts, but not crawling through every page of an archive or tag)
  • The presence of cache busting URL parameters
  • Requests for files in the WordPress /uploads directory, especially large images without resize parameters
  • Requests for invalid URLs that could be coming from page templates or JavaScript, and might manifest as HTTP response status code 404 or other status codes in the 4xx to 5xx range
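
For example, if logs are shipped in a JSON-lines format, a short script can surface the slowest requests. The sketch below is illustrative only; the file name and field names (request_time, status, request_uri) are assumptions and must be adjusted to match the fields actually present in the shipped logs.

    <?php
    // Illustrative sketch: list the slowest requests in a shipped log file.
    // Assumes one JSON object per line; the field names below are hypothetical
    // and must be adjusted to match the actual log format.
    $threshold = 1.0; // Page generation time (in seconds) considered slow.
    $slow      = [];
    $lines     = file( 'requests.log', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES ) ?: [];

    foreach ( $lines as $line ) {
        $entry = json_decode( $line, true );

        if ( is_array( $entry ) && isset( $entry['request_time'] ) && (float) $entry['request_time'] >= $threshold ) {
            $slow[] = $entry;
        }
    }

    // Print the 20 slowest requests, slowest first.
    usort( $slow, fn ( $a, $b ) => $b['request_time'] <=> $a['request_time'] );

    foreach ( array_slice( $slow, 0, 20 ) as $entry ) {
        printf( "%.3fs  %s  %s\n", $entry['request_time'], $entry['status'] ?? '-', $entry['request_uri'] ?? '-' );
    }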

Insights & Metrics

The Insights & Metrics panel, located in the application view of the VIP Dashboard, provides insights into the performance, health, and usage of an application. 

Displayed data and metrics provide insight into an environment’s cache hit rates, the frequency of inefficient queries made against the primary database, and the performance of responses to HTTP requests, as well as markers for events such as code deployments and software updates.

Local (or non-production) environment

Though a local environment is not an exact replica of a VIP environment, it can provide insight into the code executed during a request.

New Relic

New Relic is an Application Performance Monitoring (APM) tool that shows current and historical average page generation times and can capture traces of slow requests. It also summarizes database queries, object cache usage, and remote requests, making it possible to identify potential issues specific to a table, function, or API endpoint.

If a particular URL is intermittently slow (for example, when object caches expire), New Relic can capture a trace of the request. Traces can be reviewed to determine where the most time is being spent.
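
When the New Relic PHP agent is installed, adding a small amount of custom context can make slow traces easier to group and filter. The sketch below uses the agent’s newrelic_name_transaction() and newrelic_add_custom_parameter() functions; the transaction name and attribute values are arbitrary examples, and the calls are guarded in case the extension is not loaded.

    <?php
    // Sketch: attach extra context to the current New Relic transaction so that
    // slow traces are easier to group and filter. The calls are guarded because
    // the newrelic extension is only available when the agent is installed.
    if ( extension_loaded( 'newrelic' ) ) {
        // Name the transaction after the template rather than the raw URL.
        newrelic_name_transaction( 'archive/tag' );

        // Custom attributes appear on transaction traces.
        newrelic_add_custom_parameter( 'logged_in', is_user_logged_in() ? 'yes' : 'no' );
        newrelic_add_custom_parameter( 'template', 'tag-archive' );
    }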

The Apdex score on the APM Summary page and data on the Transactions page are useful for identifying root causes of user dissatisfaction (usually the worst-performing pages when transaction volume is taken into account). If browser monitoring is enabled, New Relic can provide insights into the aspects of page performance that most need improvement. Review page performance statistics for the most visited pages, such as page load time, average page load time, and throughput (calls per minute).

Reported PHP errors, warnings, and notices can point out issues in code. The fewer code issues that exist on a regular basis, the easier it will be to identify, review, and resolve new issues. Ideally, an application routinely generates no errors, and few to no warnings.

PHPCS

Performing a local PHP_CodeSniffer (PHPCS) scan of the code in an application’s wpcomvip GitHub repository will flag many potential performance and security issues.

Query Monitor

Query Monitor can help identify PHP errors, slow queries, remote requests, and other anomalies on specific pages. It also reports the page generation time.

During development, Query Monitor’s Profiling and Logging functionality can help keep track of feature performance before deploying code to production.
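
For example, Query Monitor’s qm/start and qm/stop actions time a block of code and surface the result in its timing panel, while the qm/debug and qm/warning actions write to its Logs panel. In the sketch below, my_plugin_get_related_posts() is a hypothetical feature function.

    <?php
    // Sketch: profile a feature with Query Monitor's timing actions and log
    // contextual details to its Logs panel during development.
    do_action( 'qm/start', 'related_posts' );

    $related = my_plugin_get_related_posts( get_the_ID() ); // Hypothetical feature code.

    do_action( 'qm/stop', 'related_posts' );

    do_action( 'qm/debug', sprintf( 'Related posts found: %d', count( $related ) ) );

    if ( empty( $related ) ) {
        do_action( 'qm/warning', 'No related posts returned for post ' . get_the_ID() );
    }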

Runtime Logs

Runtime Logs report PHP errors, including fatal errors, warnings, and notices, for WordPress applications, and output sent to stdout or stderr for Node.js applications.

Slow Query Logs

Slow Query Logs provide the ability to identify queries made by an application that take an unusually long time to execute. Slow queries should be optimized in order to improve database efficiency and overall responsiveness of an application.

Any query that consistently requires more than 100ms to complete should be evaluated for performance optimization. Slow Query Logs provide the request URL and statistics related to the query. Query Monitor can be useful for a more in-depth analysis of the query made by the URL and for determining the impact that the query has on page generation time.

If it is determined that a slow query is consistently run by an uncached request, performance of the request should be improved by adding object caching to the underlying code, or by offloading the query to Enterprise Search.
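
As a rough sketch of the first approach, the result of a slow WP_Query can be stored in the object cache so that the query only runs when the cached value is missing or expired. The function name, cache group, and expiry below are arbitrary examples; the commented-out 'es' argument assumes the VIP Enterprise Search integration is enabled.

    <?php
    // Sketch: cache the result of an expensive query in the object cache.
    // The cache key, group, and expiry are arbitrary examples.
    function my_theme_get_popular_post_ids() {
        $ids = wp_cache_get( 'popular_post_ids', 'my_theme' );

        if ( false === $ids ) {
            $query = new WP_Query( [
                'post_type'      => 'post',
                'posts_per_page' => 10,
                'orderby'        => 'comment_count',
                'fields'         => 'ids',
                // 'es' => true, // Assumption: offloads the query to Enterprise Search when the VIP Search integration is active.
            ] );

            $ids = $query->posts;
            wp_cache_set( 'popular_post_ids', $ids, 'my_theme', 15 * MINUTE_IN_SECONDS );
        }

        return $ids;
    }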

Last updated: May 14, 2024

Relevant to

  • Node.js
  • WordPress