Pablo Gonzalez · February 12, 2023 · 11 min read
As Salesforce customers continue to use Salesforce for mission-critical business processes, the need for predictable releases has become more important than ever, leading to an explosion in jobs for Salesforce DevOps professionals and release managers.
As I wrote in SalesforceBen, one big problem is that DevOps skills don't come naturally to most Salesforce developers.
You need highly specialized skills like Git, YAML, Linux commands, etc. This means that an interview for a Salesforce DevOps role is different from your traditional Salesforce developer interview.
In this series, I'll come up with sample questions I would ask if I were interviewing a candidate for a Salesforce DevOps role, and I'll explain each possible answer.
This series will help you crack the Salesforce DevOps interview and land your dream job! Today's topic is Sandbox Management.
I assume you know the basics of sandbox management in Salesforce. These are not basic questions about sandbox types and limitations.
If you are new to managing sandboxes, I recommend the following two articles:
6 types of Salesforce sandbox plus a clever sandbox strategy
How to refresh your Salesforce sandbox—8 best practices
Let's now move on to the questions.
This question is about Sandbox strategy, which is all about how you organize your sandboxes to fit your development process.
Typically, organizations have an Integration sandbox where developers merge their changes, followed by a UAT sandbox where business users can test new features, do training, etc.
So, why not use the same sandbox for both activities? Why not use a full sandbox as both Integration and UAT?
Generally, I would say that an Integration sandbox is a "low-fidelity" environment. This means it's an environment you can't trust to be right 100% of the time. Developers are constantly merging changes, which means that code, flows, and other types of automation are constantly changing. This makes this sandbox very unreliable for end-user testing and training.
In contrast, UAT is meant to be a "high-fidelity" environment. Therefore, we expect changes to go into UAT only after they've passed unit tests and QA testing in the Integration sandbox. Because UAT is typically a full sandbox, it also includes all production data, which makes it better suited for end-user training and testing.
For these reasons, keeping the environments separate and using one sandbox for each purpose makes sense.
Giving each developer their own developer sandbox is almost always recommended so that developers don't step on each other's work.
If you work in a shared sandbox and your newly created feature doesn't work as expected, you don't immediately know if it's a problem with your implementation or whether another developer changed a related component that your logic depends on. This makes debugging a nightmare.
So, are there scenarios where having multiple developers in the same sandbox is the right choice?
A possible scenario is where some developers work on a specialized area of Salesforce that requires too much setup in the org for it to work correctly.
Let's say you have three devs who work on an integration with PagerDuty and Slack. In this scenario, PagerDuty and Slack don't offer test environments that each developer could connect to their own developer sandbox.
Also, the integration depends on configuration records in multiple custom objects. Since data is not included in developer sandboxes, every developer needs to spend time recreating that configuration to get the integration to work.
In this scenario, it might make sense to have these 3 devs (and no one else!) working on a single sandbox with all the configuration and integration endpoints to work with PagerDuty and Slack. This can reduce the burden on developers to configure their sandbox and bypass the limitation of PagerDuty and Slack not having multiple test environments.
The key is to leave other developers who are not part of this project out of this sandbox.
I know many will disagree with this design, but I've heard about it multiple times.
This is only a sample scenario. The idea of this question is to get you to think, and if you conclude that using a shared sandbox is never a good idea, then that's ok.
Just be 100% prepared to defend your argument and think about how you would solve this problem instead.
Personally, I would rarely recommend it, but I acknowledge there are some scenarios where it might be helpful (in the short term…).
Let's say developer A works on feature A in their developer sandbox. Once the feature is ready, they deploy it to the Integration sandbox (and eventually to UAT and Production).
How can developer B get this feature into their developer sandbox? The feature never passed through their sandbox, because developer B's sandbox isn't connected to developer A's sandbox in the pipeline.
This can become a big deal in the future when developer B is asked to work on a feature that depends on feature A, but the code for A doesn't exist in their sandbox.
What you need here is what's known as a "back-promotion." A back promotion is simply another deployment. The difference is the direction of the deployment is not left-to-right (i.e., dev to Integration) but right-to-left (i.e., from Integration back into the developer's sandbox).
To do a back promotion correctly, you may need specialized DevOps tooling, like Salto. But the concept is simple: just another deployment in the opposite direction of the pipeline.
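To make that concrete, here is a minimal sketch of a back-promotion driven from Python with the Salesforce CLI. The org aliases, the manifest path, and the choice of wrapping the CLI in a script are assumptions for illustration, not a prescription:

```python
# Hypothetical sketch: "back-promote" metadata from the Integration sandbox
# into a developer sandbox using the Salesforce CLI (sf v2) from Python.
# Org aliases ("integration", "dev-sandbox-b") and the manifest path are
# placeholders -- adjust them to your own pipeline.
import subprocess

def run(cmd: list[str]) -> None:
    """Run a CLI command and fail loudly if it errors."""
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Pull the components listed in a manifest from the Integration sandbox.
run([
    "sf", "project", "retrieve", "start",
    "--manifest", "manifest/package.xml",
    "--target-org", "integration",
])

# 2. Push those same components "backwards" into developer B's sandbox.
run([
    "sf", "project", "deploy", "start",
    "--manifest", "manifest/package.xml",
    "--target-org", "dev-sandbox-b",
])
```

Note that a raw deployment like this simply overwrites whatever developer B already has for those components, which is one reason specialized tooling helps do back-promotions safely.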
The Git branching strategy is almost always affected by the number of sandboxes you have.
Let's say you are a very small Salesforce team with only two sandboxes: a developer sandbox and UAT. In this scenario, a simple branching strategy, such as a pared-down GitFlow, might be perfect. You work with only two long-lived branches: a development branch, which represents your in-progress work, and main, which represents your production org.
On the other hand, I've heard of a Salesforce team that only used the main branch. They considered sandboxes as "virtual" branches. They would commit all changes to the main branch and use the same branch to deploy to multiple sandboxes at different stages in the sprint cycle (this is known as trunk-based development).
Finally, if you have multiple sandboxes, such as developer > Integration > QA > UAT and eventually Production, it might make sense to have one branch per sandbox environment. That means one Integration branch, a QA branch, and so forth. This can help you isolate changes across different orgs, where each org has its own "source of truth."
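As a rough illustration of the one-branch-per-environment model, a CI job can derive its deployment target from the branch it is building. The branch names and org aliases below are assumptions; the only point is the mapping:

```python
# Hypothetical sketch: map a Git branch to the sandbox it represents so a CI
# job can deploy the right branch to the right org. Branch names and org
# aliases are assumptions -- use whatever your pipeline defines.
import subprocess

BRANCH_TO_ORG = {
    "integration": "integration-sandbox",
    "qa": "qa-sandbox",
    "uat": "uat-sandbox",
    "main": "production",
}

branch = subprocess.run(
    ["git", "rev-parse", "--abbrev-ref", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.strip()

target_org = BRANCH_TO_ORG.get(branch)
if target_org is None:
    raise SystemExit(f"No org mapped to branch '{branch}' -- nothing to deploy.")

subprocess.run(
    ["sf", "project", "deploy", "start",
     "--source-dir", "force-app",
     "--target-org", target_org],
    check=True,
)
```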
The most common use case to clone a sandbox is to reuse the configuration and data from an existing sandbox. For example, let's say you create a full sandbox for UAT. You then spend days replacing integration endpoints, masking email addresses, configuring custom metadata types, etc., all typical post-refresh activities.
Now, let's say you have another team that requires its own full sandbox for a different project. Rather than creating another full sandbox from Production (and spending days configuring it again), they can simply clone the existing full sandbox. The clone will include all the data and metadata changes from the source sandbox.
The same applies to Developer and Developer Pro sandboxes. You can create a developer sandbox, make any required configuration changes, and then have all other developers clone that sandbox. This way, everyone is working with the same baseline configuration.
You can also use sandbox cloning to keep a backup of your sandbox before refreshing it.
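If you'd rather script this than click through Setup, cloning is also exposed through the Tooling API: you create a SandboxInfo record whose SourceId points at the sandbox you want to clone. Here is a rough Python sketch; the instance URL, access token, API version, and record ID are all placeholders:

```python
# Hypothetical sketch: clone an existing sandbox through the Tooling API by
# creating a SandboxInfo record whose SourceId points at the source sandbox.
# The instance URL, access token, API version, and source ID are placeholders,
# and the license type must match what your org actually has available.
import requests

INSTANCE_URL = "https://yourcompany.my.salesforce.com"   # production org
ACCESS_TOKEN = "<a valid session or OAuth token>"
API_VERSION = "v59.0"
SOURCE_SANDBOX_INFO_ID = "<SandboxInfo Id of the sandbox to clone>"

resp = requests.post(
    f"{INSTANCE_URL}/services/data/{API_VERSION}/tooling/sobjects/SandboxInfo/",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "SandboxName": "UATClone1",
        "LicenseType": "FULL",
        "SourceId": SOURCE_SANDBOX_INFO_ID,
        "Description": "Clone of the configured UAT full sandbox",
    },
    timeout=30,
)
resp.raise_for_status()
print("Clone requested:", resp.json())
```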
One of the challenges with developer sandboxes is that they don't include any data. I've worked on applications that simply did not work without a huge amount of configuration data in custom objects, which made it really hard to replicate production scenarios.
There are two possible solutions to this problem:
A- Use a data-seeding solution. This can be a third-party solution or a simple script using the SFDX Data Move Utility.
B- Create a partial-copy sandbox instead.
So, when is it better to seed data than use a partial-copy sandbox?
The problem with partial-copy sandboxes is that the copied data is mostly random. You can control which objects get copied, but not which records of a given object get copied. This sometimes results in broken references, for example, when a child record is copied without its parent.
There's an idea on the IdeaExchange that proposes a fix for this problem, but I doubt it'll be tackled soon.
So, when data integrity is important to you (which should be almost always), a data-seeding solution is a much better alternative than using a partial copy sandbox.
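To make "data seeding" concrete, here is a deliberately naive Python sketch that copies a hand-picked set of configuration records from a source org into a developer sandbox. The object, fields, and credentials are invented for illustration; a real solution (SFDMU or a commercial tool) also handles relationships, upserts, and larger volumes:

```python
# Hypothetical sketch: seed a developer sandbox with a controlled set of
# records pulled from a source org (e.g. a full sandbox). Credentials, the
# custom object, and the query are placeholders.
from simple_salesforce import Salesforce

source = Salesforce(username="me@example.com.full", password="...",
                    security_token="...", domain="test")
target = Salesforce(username="me@example.com.devbox", password="...",
                    security_token="...", domain="test")

# Pull only the configuration records the feature actually needs.
records = source.query_all(
    "SELECT Name, ExternalKey__c, Endpoint__c FROM Integration_Setting__c"
)["records"]

# Strip the metadata Salesforce adds to each query result.
payload = [
    {k: v for k, v in rec.items() if k != "attributes"}
    for rec in records
]

# Bulk-insert the records into the developer sandbox.
results = target.bulk.Integration_Setting__c.insert(payload)
print(f"Seeded {sum(r['success'] for r in results)} of {len(payload)} records")
```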
Something to think about: Given the limitations I described above, when do you think it's ok to use a partial-copy sandbox?
The idea of a hotfix sandbox is that it sits outside the pipeline. When a critical issue is found in production, the hotfix sandbox is refreshed, the issue is reproduced and fixed there, and the fix is then pushed back into production, skipping all the other lower environments.
Another pattern is pushing the fix back to UAT and then to Prod.
Regardless of the pattern, I see many issues with this approach:
- If you deploy directly to production, you have no idea if the fix will work alongside future changes that are in the pipeline. In other words, you might deploy something 2 days later and break the fix (and revive the original bug) because the fix was never tested with in-progress changes.
- You now need to do reverse continuous integration, which means back-promoting the hotfix to the lower environments and running all tests to ensure that a) in-progress changes still work and b) the hotfix still fixes the issue. You may find that the in-progress changes are incompatible with the fix. (A quick way to kick off that kind of full test run is sketched right after this list.)
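For the "run all tests" part, something as small as this sketch, wired into your pipeline, can kick off a full local test run; the org alias is an assumption:

```python
# Hypothetical sketch: after back-promoting a hotfix, run all local tests in
# the Integration sandbox to confirm the fix and in-progress work still
# coexist. The org alias is a placeholder.
import subprocess

subprocess.run(
    [
        "sf", "apex", "run", "test",
        "--target-org", "integration",
        "--test-level", "RunLocalTests",
        "--wait", "60",
        "--result-format", "human",
        "--code-coverage",
    ],
    check=True,
)
```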
To me, it seems more correct to fix the issue where it originated (in a lower environment) and integrate that fix with any other in-progress changes in higher environments.
That said, I understand the desire to fix something asap and skip the whole pipeline.
It's like just-in-time technical debt: you are choosing not to spend time on this right now, but you'll have to fix the mess later.
I brought this up to my LinkedIn network, and the comments were awesome. I highly recommend you spend some time reviewing them; you’ll learn a lot, and it will help you form an opinion.
This is a simple one, but hey, you should remember the basics!
I answered this question in this article and provided some other thoughts.
As I said in question 5, performing post-refresh activities is common, especially on a full sandbox. These can include replacing integration endpoints, masking email addresses, and configuring custom metadata types.
So, what are some ways you can automate this?
I can think of a few ways to tackle this:
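Two common options are an Apex class that implements the SandboxPostCopy interface, which Salesforce runs automatically when the sandbox is created or refreshed, and a script your pipeline runs right after the refresh completes. As a small, hedged example of the latter, here is a Python sketch for one chore, masking Contact emails; the credentials are placeholders and the query is only illustrative:

```python
# Hypothetical sketch: one post-refresh chore, masking Contact emails so no
# real customer gets mailed from a sandbox. Credentials are placeholders, and
# the ".invalid" suffix mirrors what Salesforce does to User emails on refresh.
from simple_salesforce import Salesforce

sb = Salesforce(username="me@example.com.uat", password="...",
                security_token="...", domain="test")

contacts = sb.query_all(
    "SELECT Id, Email FROM Contact "
    "WHERE Email != null AND (NOT Email LIKE '%.invalid')"
)["records"]

updates = [{"Id": c["Id"], "Email": c["Email"] + ".invalid"} for c in contacts]

if updates:
    results = sb.bulk.Contact.update(updates)
    print(f"Masked {sum(r['success'] for r in results)} contact emails")
```

The same pattern extends to the other post-refresh chores: deploy environment-specific metadata (integration endpoints, custom metadata records) with the CLI, load seed data, and so on.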
So that's it! I hope these sample Salesforce DevOps interview questions helped refresh your Sandbox knowledge (pun intended).
Stay tuned for the next part of the series!