Database monitoring tool – What features do you need?

Introduction

Today’s application development ecosystem is flourishing, thanks to the democratization of development tools (open-source stacks), growth in computing power and the ability to manage massive volumes of data. This has led to a refreshing breed of applications across industries, but developers are also perpetually on edge, ensuring that their apps maintain top performance lest their customers churn. Hence, application performance monitoring (APM) has gained significant popularity today.

The Hidden Workhorse

Despite the availability of many APM tools, downtime incidents have been on the rise. Researching the primary reasons, we find that a large proportion of performance degradations are caused by avoidable issues with the hidden workhorse of applications: the database. While a few tools help monitor DBs, the DBAs responsible for their upkeep are entangled in many challenges, and a good monitoring tool needs to address them.

Challenges of a DBA

DBAs grapple with many issues to keep their databases performant and available at all times. Some of the challenges they face in their jobs are:

 

  1. Multiple DB platforms: The last decade has added to the popular relational platforms (Oracle, DB2, SQL Server, etc.) a thriving open-source ecosystem of non-relational databases (Hadoop, MongoDB, Cassandra, etc.) built to handle larger volumes and varieties of data. Unlike in the past, a typical organization now uses multiple DB platforms to support different types of applications.
    DBAs, conventionally experts at a single platform or type of technology, are now required to take up stewardship of several. Developing expertise across a fast-growing bouquet of technologies while also discharging normal DBA duties is quite a challenge.
  2. Growing instances: Data growth is on a steep exponential curve. One oft-quoted statistic from 2014 claimed that 90% of all the data ever generated had been created in just the preceding two years! This points to one consequence: massive growth in the number of DB instances needed to hold it all. The number of instances a DBA has to manage is rising accordingly, and manually monitoring hundreds of critical parameters is no mean task. Moreover, resolving the issues that arise in DB platforms by poring over log files is extremely time-consuming.
  3. Shift in responsibilities: Traditionally, the bulk of a DBA’s time went into creating DB instances and maintaining them, ensuring uptime and consistent performance at all times. Today, given the criticality of databases and stringent customer SLAs, DBAs are expected to reinvent their job descriptions and contribute to higher-value activities that drive down DB-related costs, improve security, and so on. Yet the time they spend on the usual, regular activities continues to increase.
    This shift can only be achieved by efficiently automating some of their critical yet repetitive jobs to create time for the new responsibilities.

What must a good DB monitoring tool do?

To achieve the required application performance levels, what is needed is not just a tool that monitors databases, but one that helps DBAs overcome the challenges they face today.

Below is what we think a DB monitoring tool must be:

  1. A single pane of glass: Given that a DBA is required to monitor multiple platforms, a good monitoring tool must support a wide variety of databases in a single view. At the least, we believe the tool must easily integrate with the 20 most popular DB platforms, both relational and NoSQL: Oracle, DB2, SQL Server, MySQL, Redis, Couchbase, MongoDB and Cassandra, to name a few. This saves the DBA from juggling multiple tools or constantly hunting for new ones as newer DBs arrive.
  2. A master of DBs, not just a jack: Remember, DBAs may not possess expertise in every DB they handle, given the rate at which new DBs are introduced. A good monitoring tool takes that responsibility off the DBA’s shoulders and provides in-depth, contextual information about each DB it supports.

    Coverage of metrics like resource utilization, database objects, query response times, wait times, buffer usage, latches, change events, OS statistics, reads/writes, top queries, etc. must be a default. Additionally, the tool must capture critical parameters specific to each DB. Here too, the more the merrier: the tool should prefetch all the data the DB exposes for monitoring, and the DBAs can then filter out the metrics that are not relevant to them.
  3. As close to ‘real-time’ as possible: Oracle DB, for instance, exposes close to 2,500 metrics about its health, which can generate many GBs of data per day for each DB. A good monitoring tool will be able to process all of this data, but a phenomenal tool would deliver these metrics to the DBA within seconds of their being generated. Watch out for the numerous tools that claim to be ‘real-time’ but make information available only after many minutes. For a DBA, the world can turn upside-down in a 5-minute window.
  4. An intelligent time-saver: The argument of alert fatigue crops up each time monitoring tools are discussed. Most commonly, tools let DBAs configure alerts whenever a metric of interest breaches a threshold. But how does this really help a DBA when hundreds of alerts arrive within a span of minutes? Remember the earlier discussion about a DBA’s time crunch and changing priorities.

    A good monitoring tool layers ample intelligence on top of the thousands of metrics and alerts it captures. Look for the tool’s capability to autonomously identify anomalous patterns in the DB metrics and to distil the possible causes of these events. This can greatly reduce the time a DBA spends on issue resolution.
  5. Simple to use: The previous points may have painted an extremely complex image of the tool, but remember: “Simplicity is the ultimate sophistication”. In a DBA’s already time-constrained life, there is no room for a technical genius’ complex product. The tool needs to be extremely simple to use in the following aspects:
    • Installation and setup – Preferably, a single click should complete the setup of the monitoring application
    • Configuration – An easy-to-use UI to configure DB details and set thresholds, notifications and alerts
    • Visualization – Drag-and-drop graphing and reporting options, since a tool cannot anticipate every need of its users
    • Ad-hoc search and analysis – As mentioned earlier, it is not possible for a user to code for every scenario the DBs are going to present. The tool must provide an easy, centralised way to search log files for specific events and clues about an issue
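The threshold alerting from point 4, together with basic fatigue control, can be sketched in a few lines. This is a minimal illustration in Python; the class name, the metric name and the cooldown policy are our own assumptions for the sketch, not any particular product’s API:

```python
from datetime import datetime, timedelta

class ThresholdAlerter:
    """Fires an alert when a metric breaches its threshold, but
    suppresses repeats for the same metric within a cooldown window
    to limit alert fatigue."""
    def __init__(self, thresholds, cooldown=timedelta(minutes=10)):
        self.thresholds = thresholds      # metric name -> max allowed value
        self.cooldown = cooldown
        self._last_fired = {}

    def check(self, metric, value, now):
        limit = self.thresholds.get(metric)
        if limit is None or value <= limit:
            return False                  # no threshold, or within limits
        last = self._last_fired.get(metric)
        if last is not None and now - last < self.cooldown:
            return False                  # suppressed: still in cooldown
        self._last_fired[metric] = now
        return True

# Hypothetical metric and values, purely for illustration.
alerter = ThresholdAlerter({"buffer_cache_miss_pct": 10.0})
t0 = datetime(2024, 1, 1, 9, 0)
print(alerter.check("buffer_cache_miss_pct", 14.2, t0))                         # True: first breach fires
print(alerter.check("buffer_cache_miss_pct", 15.0, t0 + timedelta(minutes=1)))  # False: suppressed
```

A real tool would also group related alerts and route notifications; the point here is only that naive one-alert-per-breach behaviour is easy to improve on.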
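The “intelligent time-saver” idea in point 4 can likewise be sketched with a simple rolling z-score detector. Real tools use far more sophisticated models; this toy version, fed hypothetical query-latency samples we made up, merely illustrates the principle of flagging metric values that deviate sharply from recent history:

```python
from statistics import mean, stdev

def find_anomalies(samples, window=20, threshold=3.0):
    """Flag indices whose value deviates more than `threshold`
    standard deviations from the trailing window's mean."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical latency samples (ms): steady around 12, then one spike.
latencies = [12, 11, 13, 12, 12, 11, 13, 12, 11, 12,
             13, 12, 11, 12, 13, 12, 11, 13, 12, 12, 95]
print(find_anomalies(latencies))  # -> [20], the index of the spike
```

The step a good tool adds on top, correlating such anomalies across metrics to suggest causes, is where most of the DBA’s time is actually saved.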
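The ad-hoc search described in point 5 boils down to a grep-style scan over collected log files. A minimal sketch, with a throwaway log file created only for the demo; a real deployment would point the function at centrally collected logs from each monitored instance:

```python
import re
import tempfile
from pathlib import Path

def search_logs(log_dir, pattern):
    """Grep-style search across every *.log file under log_dir,
    returning (filename, line_number, line) tuples for matches."""
    regex = re.compile(pattern)
    hits = []
    for path in sorted(Path(log_dir).rglob("*.log")):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if regex.search(line):
                hits.append((path.name, lineno, line))
    return hits

# Demo with a hypothetical log file and log lines.
with tempfile.TemporaryDirectory() as d:
    Path(d, "db1.log").write_text(
        "2024-01-01 10:00:01 INFO checkpoint complete\n"
        "2024-01-01 10:00:07 ERROR deadlock detected\n")
    print(search_logs(d, r"ERROR|deadlock"))  # returns the ERROR line with its location
```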

Conclusion

The life of a DBA can get extremely complicated at times, with different types of databases and different protocols. However, choosing a database monitoring tool wisely could be the difference between constant firefighting and (comparatively) smooth sailing.
