ArcGIS Online

Tracking your ArcGIS Online Feature Data Store Key Health Indicators

Every time you visit the doctor, a familiar routine takes place. Temperature, heart rate, blood pressure—information collected and retained.

Why do we do this? You might be there for a sore pinkie toe. Surely heart rate doesn’t impact that.

I Am Not A Doctor—but I know these measurements are done in part to establish a baseline, a collection of data points representing the range of values considered healthy or normal for your body.

Variation from this baseline can be a more revealing health indicator than the number on its own.

In December 2020, ArcGIS Online introduced the Feature Data Store Resource Usage Chart to Premium Feature Data Store organizations. This tool displays your Feature Data Store’s key health indicators over time, offering administrators precise insight into its typical range of values. Early awareness of changes in resource use patterns offers administrators the opportunity to take corrective action before problems arise.

In this blog, we’ll first learn how to access the usage chart, and then we’ll discuss how getting to know your Feature Data Store baseline will help you keep it in good health!

For more information about Standard and Premium Feature Data Store, including pricing and capabilities, please visit our ArcGIS Online FAQ: https://doc.arcgis.com/en/arcgis-online/reference/faq.htm#anchor7

Accessing the Usage Chart

To begin, log into your organization with an Administrator role, then navigate to your Organization Overview page.

From here, you can see what size Feature Data Store your organization is using, and the percentage of feature data storage which is already used.
Click on the “Feature Data Store” hyperlink to access the Usage Chart.

Org Settings Data Store Type and Storage

This expands the administrator’s insight into Feature Data Store and its two key values: feature data storage and feature data compute — which I like to call “horsepower.”

Feature Data Storage and Feature Data Compute

Getting to know your Feature Data Store’s Baseline

Our heart rates and blood pressure fluctuate throughout the day, based on exertion, food intake, ambient temperature, and sometimes for no reason we can discern.

Similarly, your data store’s storage and computational use will fluctuate. Healthy systems perform various self-care routines and recycle their processes periodically—these can result in some visible and normal movement in computational resources.

In addition to these minor variations, our organization’s workflows vary. On some days, your data store may be running a marathon and on others, it may be enjoying a lazy Sunday. If 350 fieldworkers return from the field and sync their offline edits simultaneously, we’ll likely observe a corresponding increase in computational load. And at night (or lazy Sundays) our Feature Data Store may register little or even no activity.

The key to managing your resources lies in recognizing what range of data points lies within your Feature Data Store's normal use patterns, and knowing what to do if you see unusual variation.

Storage


The storage chart includes three metrics: your Feature Data Store's total, or allocated, space is represented by the horizontal line; used space is shown in dark grey; and remaining, or available, storage is shown in light grey.

In my example above, the Feature Data Store is allocated 1 terabyte (TB) of total storage space. Of that space, there are 933.48 gigabytes (GB) remaining. At a glance, I can see my organization has plenty of room available for additional feature data.
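To make the relationship between these numbers concrete, here is a quick back-of-the-envelope calculation in Python using the figures from the example above; treating 1 TB as 1024 GB is an assumption for illustration.

```python
# Quick arithmetic from the example above: how much of the 1 TB allocation is used
allocated_gb = 1024.0        # 1 TB of total allocated storage (assuming 1 TB = 1024 GB)
remaining_gb = 933.48        # available storage shown on the chart
used_gb = allocated_gb - remaining_gb

print(f"Used: {used_gb:.2f} GB ({used_gb / allocated_gb:.1%} of allocated)")
# Used: 90.52 GB (8.8% of allocated)
```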

Used and Available

This information helps you maintain awareness of your organization’s storage use. If you observe increasing storage consumption, you can run Organization Reports to find large items or items you and your team can identify as being stale or otherwise unnecessary. And if you hear widespread user reports of consistently failing attempts to add new features or upload data, check your storage use–these could be symptoms of running out of space.
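If you prefer a scripted view to complement Organization Reports, a minimal sketch like the one below, using the ArcGIS API for Python, can surface the largest hosted feature layer items. The credentials, search scope, and item counts here are placeholder assumptions to adapt to your own organization.

```python
# A minimal sketch: list the largest hosted feature layer items in an organization.
# Sign-in details, the search scope, and the result limits are placeholders.
from arcgis.gis import GIS

gis = GIS("https://www.arcgis.com", "your_admin_username")  # prompts for a password

# Search the organization's hosted feature layer items
items = gis.content.search(query="", item_type="Feature Layer", max_items=500)

# Item.size is reported in bytes; sort descending and show the top 10
largest = sorted(items, key=lambda item: item.size or 0, reverse=True)[:10]
for item in largest:
    print(f"{item.title:50} {item.size / (1024 ** 2):10.1f} MB  owner: {item.owner}")
```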

But if you and your team validate all of the content is necessary, it may be time to consider upgrading to a larger Feature Data Store.

Upgrade your Standard Feature Data Store to a Premium Feature Data Store online, or move between Premium Feature Data Store levels by contacting your Account Manager. Upgrading and downgrading between ArcGIS Online Feature Data Store levels requires no work from your team, and all content and URLs are unaffected. Upgrading from Standard to Premium or to a higher Premium level can happen at any time with a minimum billing period of 30 days.

Computational Resources

Computation is, essentially, the work being done by the database engine powering the ArcGIS Online Feature Data Store. Think of it like a vehicle: if you need to move a ton of gravel, a pickup truck with a lot of torque will climb a hill much faster than a passenger car carrying the same weight.

To apply this reasoning to ArcGIS Online, we have to understand what work is done by the Feature Data Store. In the graphic below, you can see common workflows listed in order from most to least power use, along with some of the "multipliers" which can increase the work done at the database level, such as concurrency and multipart features.

Work done

This is a general idea of which workflows rely on the Feature Data Store, that is, operations performed on hosted feature services. Anti-pattern workflows are those which do not follow best practices, for example enabling editing on a public feature layer in a web map. Best practices are blueprints which configure clients to conserve infrastructure; without them, the Feature Data Store must work harder. Frequent bulk edits and ETL can require more horsepower; I generally consider this effect once loads reach roughly 100K records or repeat more often than every 15 minutes. There are no fixed rules, and the multipliers add much potential variation, but these are good general ideas to keep in mind as you build awareness of your team's workflows and your organization's behaviors via the usage chart.
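As one illustration of the bulk-edit point above, the following sketch batches adds to a hosted feature layer instead of posting them one at a time, which reduces the number of round trips the database must service. The service URL, field names, geometry, and batch size are hypothetical and should be tuned to your own data and workflow.

```python
# A hedged sketch: load features in batches rather than one request per feature.
# The service URL, fields, and batch size are hypothetical; tune them to your data.
from arcgis.gis import GIS
from arcgis.features import FeatureLayer

gis = GIS("https://www.arcgis.com", "your_admin_username")  # prompts for a password
layer = FeatureLayer(
    "https://services.arcgis.com/<your-org-id>/arcgis/rest/services/Inspections/FeatureServer/0",
    gis,
)

# 10,000 hypothetical point features to load
new_features = [
    {
        "attributes": {"status": "open", "crew": f"crew_{i % 25}"},
        "geometry": {"x": -116.5 + (i % 100) * 0.001, "y": 39.5,
                     "spatialReference": {"wkid": 4326}},
    }
    for i in range(10_000)
]

BATCH_SIZE = 1000  # ten requests instead of ten thousand
for start in range(0, len(new_features), BATCH_SIZE):
    chunk = new_features[start : start + BATCH_SIZE]
    result = layer.edit_features(adds=chunk)
    failures = [r for r in result["addResults"] if not r["success"]]
    if failures:
        print(f"{len(failures)} adds failed in the batch starting at index {start}")
```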

The usage chart tracks an aggregate computational metric composed of the percentage of CPU in use, input/output (I/O) volume, and memory use. Your organization's consumption of these resources is tracked on the vertical (y) axis, and time is represented along the horizontal (x) axis.

This example has lots of gaps and some spikes. How can we tell from this chart whether or not this Feature Data Store is healthy? First, we'll understand what gaps and spikes are.


Gaps

The gaps in this chart are time periods during which this Feature Data Store used so little of its computational capacity that it did not register on the chart at all. It does not mean the Feature Data Store is off, sleeping, or locked; your Premium Feature Data Store is always running.

Spikes

These spikes indicate rapid increases in effort at the database level, but they're brief and not out of the ordinary for this organization or any of several others I work with frequently. I expect these are a blend of the aforementioned "self-care" routines and any number of more intensive analysis, edit, or query workflows my colleagues and I run continuously. These spikes are not correlated with latency or other unexpected behavior and are part of this organization's normal routine.

However…

Sustained maximum usage

This pattern is sustained maximum usage: a time period during which 100% of available Feature Data Store power was in active use. This is not something I see frequently, and I'm confident it occurred during a time period in which hosted feature services were experiencing significant latency. This indicates a bottleneck, with end users waiting for their hosted feature service requests to get their turn at the database. Some end users, depending on what kind of client they're using, may even have seen timeout responses.

In this circumstance, the ArcGIS Online administrator must consider their own organization's needs and values to arrive at the next best action. The role this organization plays in your team's business will be a factor. If this is a prototype environment for your GIS team, perhaps occasional latency is acceptable and the additional investment in a higher level is not aligned with your business priorities. If this ArcGIS Online organization is the primary vehicle for your high-visibility engagement with public stakeholders, your risk assessment may be more conservative, favoring operational consistency over cost. There are many use cases, and the right decision comes from applying knowledge of your organization's values and needs.

ArcGIS Online Feature Data Store Resource Management Analysis & Best Practices

Understanding baseline behaviors is key to answering common questions about your Feature Data Store, such as:

Sizing

Is our Feature Data Store the correct size for our needs?

Based on the example organization shown earlier, I can see our current storage capacity is sufficient, and we have room to grow. If I notice any significant or sudden increases in storage consumption, I’ll connect with the owners of large or numerous data sets to learn the context around their storage use. Here again, ArcGIS Online Organization Reports support the team’s active discovery and evaluation of large, aging, redundant, or forgotten items.
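As a scripted complement to Organization Reports for the "aging or forgotten" part of that review, here is a hedged sketch that flags hosted feature layers not modified in roughly a year. The one-year threshold, search scope, and result limit are assumptions to adapt to your own organization.

```python
# A minimal sketch: flag hosted feature layers not modified in about a year,
# as one input to a stale-content review. Threshold and scope are assumptions.
from datetime import datetime, timedelta, timezone
from arcgis.gis import GIS

gis = GIS("https://www.arcgis.com", "your_admin_username")  # prompts for a password
cutoff = datetime.now(timezone.utc) - timedelta(days=365)

items = gis.content.search(query="", item_type="Feature Layer", max_items=500)
for item in items:
    # Item.modified is an epoch timestamp in milliseconds
    modified = datetime.fromtimestamp(item.modified / 1000, tz=timezone.utc)
    if modified < cutoff:
        print(f"{item.title:50} last modified {modified:%Y-%m-%d}  owner: {item.owner}")
```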

The computational resources, though not under continuous use, comfortably accommodate spikes I know are related to our typical use case: inviting groups to check out and test new capabilities. Looking at the Feature Data Store levels (shown below), I know the next size down offers approximately half my current resources. This would not adequately serve my highest volume use cases, so it seems my organization is properly sized.

Levels

Forecasting

What Feature Data Store size will we need next year?

If you administer an organization which serves multiple teams, you’ll want to maintain communication with their leadership to understand plans for the future. My organization is expected to stay as it is; there are no plans for new data such as statewide parcel data or migrating a new division into my organization. And we do not anticipate adding new members or increasing our reliance on analytical workflows. With this knowledge, I can review our past growth and project a similar rate for the upcoming year.
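As a simple illustration of that projection, the sketch below extrapolates a straight-line growth rate from monthly used-storage readings; the numbers are illustrative, not figures from any real organization.

```python
# A simple linear projection of feature data storage growth.
# The monthly readings below are illustrative placeholders.
monthly_used_gb = [62.0, 66.5, 70.1, 74.8, 78.2, 83.0, 86.4, 90.5]  # last 8 months

# Average month-over-month growth across the sampled period
growth_per_month = (monthly_used_gb[-1] - monthly_used_gb[0]) / (len(monthly_used_gb) - 1)

months_ahead = 12
projected_gb = monthly_used_gb[-1] + growth_per_month * months_ahead
allocated_gb = 1024  # the 1 TB Feature Data Store from the earlier example

print(f"Growth rate: ~{growth_per_month:.1f} GB/month")
print(f"Projected use in {months_ahead} months: ~{projected_gb:.0f} GB "
      f"({projected_gb / allocated_gb:.0%} of {allocated_gb} GB allocated)")
```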

Best Practices

1. Be on the lookout for indications that your organization’s needs may be increasing.

2. Consider the thresholds which require action.

3. Conserve energy through adoption of best practices.

Take a quick look at your Feature Data Store usage chart each day, and you’ll quickly learn its baseline patterns. With this knowledge, you’re well-equipped to keep it in excellent health!

About the author

Marianne has been working in Esri's cloud since 2015, first with Managed Cloud Services and today as a Product Manager on the ArcGIS Online team. Her early days as a GIS professional found her in boots and trucks as a cartographer and GIS specialist in the great state of Nevada.

