Writing Triggers - Apex Test classes - testing in Sandbox with very little data


I'm curious how others test their code in a sandbox when their Production environment has more than 100k records in one or more objects. In my case, we have 350k Leads, and I'm writing code that "works" but whose test class turns out to be inadequate only when I actually deploy, since my sandbox has only a handful of Leads to test against. I was getting ready to upload 110k Leads to it, then checked the data storage constraints: I get a whopping 10 MB of storage in the sandbox, which is useless for this.

Any thoughts?


Attribution to: AMM

Possible Suggestion/Solution #1

For most things you should not need actual data to test with. Try to minimise data dependencies and insert data at the start of your test methods; it will be rolled back automatically and will never even appear in your organisation.
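A minimal sketch of that pattern, assuming a trigger on Lead is the code under test (the field values and the status assertion are illustrative, not from the original):

```apex
@isTest
private class LeadTriggerTest {
    @isTest
    static void testSingleLead() {
        // Data inserted inside a test method is rolled back automatically
        // after the test finishes; it never touches real org data.
        Lead l = new Lead(LastName = 'Test', Company = 'Acme');

        Test.startTest();
        insert l; // fires the Lead trigger under test
        Test.stopTest();

        // Re-query to verify whatever the trigger is expected to do
        l = [SELECT Id, Status FROM Lead WHERE Id = :l.Id];
        System.assertNotEquals(null, l.Id);
    }
}
```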

If your triggers are bulkified, and you test them with both a single record and a list of records, you should be safe. I believe testing is mostly about verifying that your code does what it is functionally expected to do. If you run into new exceptions on your live organisation because of larger data volumes, you probably wouldn't have thought to write tests to catch those cases to begin with. I'd say that's part of the learning curve.
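A sketch of the bulk side of that advice, inserting a full 200-record batch in one DML call (field values are illustrative):

```apex
@isTest
private class LeadTriggerBulkTest {
    @isTest
    static void testBulkInsert() {
        // Build 200 records: one DML statement processes them as a
        // single trigger invocation, exercising the bulkified path.
        List<Lead> leads = new List<Lead>();
        for (Integer i = 0; i < 200; i++) {
            leads.add(new Lead(LastName = 'Test' + i, Company = 'Acme'));
        }

        Test.startTest();
        insert leads; // trigger runs once for the whole batch
        Test.stopTest();

        System.assertEquals(200, [SELECT COUNT() FROM Lead WHERE Company = 'Acme']);
    }
}
```

A trigger that performs SOQL or DML inside a per-record loop will typically pass the single-record test but blow a governor limit here, which is exactly the failure mode this test is meant to surface before deploy.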

If you're going to do a lot of development, it is advisable to move from a config/dev sandbox to a full sandbox before deploying to your production organisation. A full sandbox can hold more data and is a mirror of your production org, so most issues should already show up there.

Attribution to: Samuel De Rycke

Possible Suggestion/Solution #2

You're right that it's a challenge. Since you are working on a single org (as opposed to a package), things aren't too bad.

First, be sure you have unit tests that run bulk tests against 200 records. Next, when dealing with larger data sets, your primary limit concerns are typically the number of rows retrieved and whether your queries are selective. Rows retrieved can depend heavily on the amount of data in the org and even the relationships between objects. One answer is to put a LIMIT clause on every query. Another approach is to use the Developer Console on the production org to run your application's typical queries and see what kinds of results you get. If you find yourself retrieving large numbers of rows, refactor or redesign your code, or add appropriate LIMIT clauses.
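A sketch of a capped, selective query in that spirit (the field choices and date filter are illustrative assumptions, not from the original):

```apex
// Cap the rows retrieved so behaviour stays predictable on a large org.
// The indexed CreatedDate filter keeps the query selective; the LIMIT
// clause bounds the worst case regardless of data volume.
List<Lead> recent = [
    SELECT Id, Status, Company
    FROM Lead
    WHERE CreatedDate = LAST_N_DAYS:30
    ORDER BY CreatedDate DESC
    LIMIT 200
];
```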

Remember that you can call public methods in your code from the Developer Console, so you can do quite a bit of manual testing that way. The idea is to get a sense of where your code is approaching limits so that you can adapt the design as needed.
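For example, an Anonymous Apex snippet in the Developer Console can exercise a service method against real production data and report limit consumption. `MyLeadService.scoreLeads` here is a hypothetical public method standing in for your own code:

```apex
// Execute as Anonymous Apex in the Developer Console.
// MyLeadService.scoreLeads is a placeholder for a real public method.
List<Lead> sample = [SELECT Id, Status FROM Lead LIMIT 200];
MyLeadService.scoreLeads(sample);

// The Limits class reports how close this run came to the governor limits
System.debug('Query rows used: ' + Limits.getQueryRows()
    + ' of ' + Limits.getLimitQueryRows());
System.debug('SOQL queries used: ' + Limits.getQueries()
    + ' of ' + Limits.getLimitQueries());
```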

Because you can't test very large datasets on your sandbox, you'll probably never reach 100% certainty that the code will work. But by taking this approach, you should be able to come very close.

If you ever work on a package, the story changes: you'll need a larger developer sandbox to test against larger datasets.

Attribution to: kibitzer
This content is remixed from Stack Overflow or Stack Exchange.
