Introduction
In today’s digital world, no system operates in isolation. Integration is the process of connecting different systems, applications, or databases to enable seamless data sharing and coordinated processes. For businesses using Salesforce, integration is key to creating a unified ecosystem: it ensures that data flows smoothly across platforms, streamlining operations and enhancing customer experiences.
Salesforce integration allows organizations to connect Salesforce with other systems like ERP platforms, marketing tools, or payment gateways, ensuring a single source of truth for business-critical information. It empowers businesses to sync customer records, automate workflows, and enable real-time data updates, allowing them to operate efficiently and at scale.
In this blog, we explore Salesforce integration patterns and their role in connecting systems seamlessly for efficient data sharing and process automation. We’ll discuss different integration architectures like Point-to-Point and Hub-and-Spoke, along with Salesforce’s five core patterns. For each, we’ll cover benefits, limitations, and use cases so you can choose the right pattern for your integration needs.
By the end of this blog, you’ll have a solid understanding of Salesforce integration concepts. This knowledge will make it easier to design solutions that connect Salesforce with other systems seamlessly. Let’s dive in!
What Is Integration?
Integration is the process of linking different computing systems or applications so they can work together as a cohesive unit.
This means connecting software applications, databases, or even devices—whether they’re hosted on-premises or in the cloud. The goal is to allow these systems to exchange data and carry out tasks in a coordinated way, ensuring processes run smoothly across the entire ecosystem.
Salesforce integration refers to the process of connecting Salesforce with other systems to allow seamless data sharing and process automation. Integration ensures that information stored in one system is readily accessible and actionable in another, streamlining workflows across platforms.
For example:
- When a customer places an order on an e-commerce website, Salesforce can automatically receive the order details and create a record.
- When inventory is updated in the ERP, the changes can be reflected in Salesforce in real time.
Why Use Integration?
In today’s digital landscape, no system operates in isolation. Businesses need integrated systems to:
- Eliminate Data Silos and Enable a 360-Degree View of Customer Information: Integration consolidates data from multiple systems, ensuring everyone works on accurate and up-to-date information and providing a unified view of customer interactions, preferences, and history.
- Enhance Data Flow and Accuracy: Seamless data exchange between applications eliminates redundancies, improves accuracy, and ensures all systems remain synchronized.
- Automate Workflows and Accelerate Business Processes: By automating data transfers and event-driven workflows, integration reduces manual effort, human error, and delays, enabling faster operations like order management or customer support.
- Improve Team Collaboration and Efficiency: Teams gain access to consistent, real-time data across systems, promoting collaboration and eliminating duplicated efforts.
- Strengthen Security: Centralized integration ensures robust security measures to protect sensitive data during transmission and storage.
- Accelerate Decision-Making with Real-Time Insights: Real-time or near-real-time integrations provide up-to-date data, empowering teams to make informed decisions quickly.
- Enhance Customer Experience: A seamless connection across platforms enables businesses to offer customers a smooth and unified journey.
- Support Scalability and Reduce Costs: Integration facilitates faster business growth by interconnecting systems, reducing development overhead, streamlining maintenance, and lowering operational costs.
What is an API?
API stands for Application Programming Interface. It is a set of rules and protocols that allows two applications to communicate with each other. Think of an API as a bridge that enables data to flow seamlessly between systems, ensuring they work together efficiently.
How does it work?
When you use an app on your phone, the app communicates with a server through an API. This happens when you perform tasks such as checking the weather or ordering food. The API retrieves data from the server and sends it back to the app, presenting it in a user-friendly format. For example, when you search for nearby restaurants in a food delivery app, the API connects to the restaurant database. It retrieves the relevant information and then displays it to you.
Why are APIs important in Salesforce integration?
In Salesforce, APIs enable connections between Salesforce and external systems. Whether you are syncing customer data with an ERP, pushing order details to a fulfillment system, or fetching real-time inventory information, APIs are the backbone of these interactions.
There are different types of APIs in Salesforce, such as REST API, SOAP API, Bulk API, and Streaming API. Each serves a unique purpose, which we’ll explore in the ‘Salesforce Integration Capabilities’ section. APIs ensure seamless communication and play a critical role in building efficient and scalable integrations.
What Are the Various Salesforce Integration APIs?
Salesforce provides several API tools for integration:
- REST API: Ideal for lightweight, synchronous operations using JSON or XML.
- SOAP API: A more structured protocol, suitable for system-to-system integrations.
- Bulk API: Optimized for large-scale data loads, like migrations.
- Streaming API: Supports real-time notifications using an event-driven model.
- Outbound Messaging: Declarative integration for triggering SOAP messages.
- Salesforce Connect: Enables real-time data virtualization via external objects.
- Heroku Connect: Synchronizes Salesforce with Heroku-hosted applications.
- Web Service Callouts: Allows Salesforce to initiate interactions with external systems.
Types of Integration Architectures
1. Point-to-Point Integration
In this simplest model, two systems communicate directly via custom code or APIs, with no intermediary; data flows in a one-to-one relationship between the two systems.
For example, imagine you have a sales application. It receives new order information from an external website, forwards orders to a shipping application, generates invoices from the ERP, and tracks deliveries through an external system. Each of these connections is built and maintained separately as its own little integration.
- Benefits:
- Simplicity: Straightforward to set up and requires minimal configuration.
- Low Cost: Suitable for small-scale integrations with minimal overhead.
- Direct Communication: Provides tightly coupled connections for immediate data exchange.
- Limitations:
- Scalability Issues: As the number of applications increases, maintaining direct connections becomes challenging (N*(N-1)/2 connections; for example, 10 applications can require up to 45 distinct links).
- Lack of Flexibility: Changes in one system often require modifications in connected systems.
- Poor Governance: Limited visibility and control over data flows.
- Use Case:
- Small Integrations: Ideal for scenarios where only two applications need to share data, for example, connecting a CRM to a billing system.
- Prototyping: Testing integrations quickly without significant investment in middleware or infrastructure.
2. Hub-and-Spoke Integration
In this model, every system connects only to a central “hub,” rather than to each other directly. The hub acts as the traffic controller: it receives messages from one system, applies any necessary transformations or routing rules, and forwards them to the appropriate target system. This reduces the number of individual connections you have to manage and lets you centralize security, logging, and data mapping in one place.
For example – Your sales website sends new order details to the hub. The hub then:
- Calls the ERP to generate an invoice,
- Sends shipping instructions to the logistics system,
- Updates your analytics database with order metrics.
The backend systems never talk to each other directly; they speak only to the hub, which keeps all integrations organized and consistent.
- Benefits:
- Simplified Connectivity: Each application connects to a central hub, reducing the number of direct connections.
- Centralized Management: Easier to govern, secure, and monitor integrations.
- Scalability: New endpoints can be added without disrupting existing connections.
- Data Transformation: Supports centralized data transformation and routing.
- Limitations:
- Single Point of Failure: The hub becomes a critical component, and its failure can disrupt the entire system.
- Initial Investment: Requires substantial setup effort and costs for hub infrastructure.
- Potential Bottlenecks: High traffic can strain the hub, leading to performance issues.
- Use Case:
- Large Enterprises: Connecting multiple systems (e.g., ERP, CRM, e-commerce) in a centralized manner.
- Data Aggregation: Consolidating data from various sources for reporting or analytics.
- Consistency and Monitoring: Scenarios requiring consistent data transformation, routing, and error handling.
3. Enterprise Service Bus (ESB)
An ESB (Enterprise Service Bus) is an advanced middleware layer that extends the hub‑and‑spoke model by providing built‑in content‑based routing, protocol conversion (e.g., SOAP ↔ REST), data transformation (XML ↔ JSON), process orchestration, error handling, and monitoring. It acts like a smart traffic control center. It directs messages efficiently. It can also reshape and reroute messages as needed.
For example – When your website posts an order in XML, the ESB:
- Converts the XML into JSON for your ERP
- Waits for the ERP’s confirmation
- On success, sends a REST call to the shipping API
- If shipping fails, triggers an alert and rolls back any partial work.
The ESB handles protocol differences, transformation, and workflow steps automatically. As a result, each system only needs to know how to talk to the ESB. This simplifies integration at enterprise scale.
Examples of ESB tools for Salesforce integration:
- MuleSoft Anypoint Platform (Salesforce’s own integration solution)
- Jitterbit Harmony
- Dell Boomi AtomSphere
- SAP BTP Integration Suite
Each of these tools provides connectors or adapters specifically designed to streamline integration with Salesforce APIs and data models.
- Benefits:
- Scalability: Easily handles high data volumes and multiple applications.
- Support for Complex Workflows: Facilitates advanced message routing, transformation, and orchestration.
- Loose Coupling: Applications remain independent, making it easier to replace or upgrade individual systems.
- Integration Flexibility: Supports multiple protocols, data formats, and endpoints.
- Limitations:
- High Initial Costs: Setup and maintenance require significant investment in hardware, software, and expertise.
- Complexity: Requires specialized skills for implementation and management.
- Performance Overheads: Improper configurations can lead to delays in data processing.
- Use Case:
- Enterprise-Scale Integrations: Connecting diverse systems in large organizations, such as CRM, ERP, and data warehouses.
- Data Transformation: Handling complex data transformations between systems with varying formats.
- Orchestrated Workflows: Implementing business processes involving multiple systems, such as order fulfillment or customer onboarding.
Type of Integration Data Flow in Salesforce
- Inbound Integration
- Inbound Integration occurs when external systems initiate calls into Salesforce (or any target), pushing data or requests via REST/SOAP APIs, Apex web services, or platform events.
- You can refer to my other blog post for more in-depth knowledge on inbound integration.
- Use Cases:
- Importing Data: Sync customer data from an ERP system to Salesforce.
- Real-Time Updates: External systems push real-time updates like payment confirmation to Salesforce records.
- Third-Party Application Data: Integrate data from external applications like e-commerce or marketing platforms.
- Outbound Integration
- Outbound Integration happens when Salesforce (or another source) sends data or invokes external APIs via Apex callouts, outbound messaging, or platform events.
- Use Cases:
- Sending Data to ERP: Push order details from Salesforce to an ERP system.
- Triggering External Actions: Notify an external shipping system when an order is processed in Salesforce.
- Third-Party Service Calls: Integrate Salesforce with payment gateways, email services, or social platforms.
Types of Integration Timing in Salesforce: Synchronous vs. Asynchronous
When designing integrations in Salesforce, it is essential to understand the timing mechanisms involved. Salesforce supports two primary types of integration timing: Synchronous and Asynchronous. Each approach is suited for specific business requirements based on immediacy, complexity, and volume of data.
Synchronous Integration
Synchronous integration is a real-time communication mechanism in which the client sends a request and waits for an immediate response from the server before proceeding. This type of integration ensures that the process is completed end-to-end within a single transaction.
Key Characteristics
- Real-Time Processing: Operations are processed instantly, making it suitable for scenarios requiring immediate feedback.
- Request and Response: A client sends a request and waits until the server processes it and returns a response.
- Tight Coupling: Both systems involved in the integration must be available at the same time.
- Error Handling: Errors are usually reported instantly and must be handled in real-time.
Common Use Cases
- Fetching live data for user interfaces, such as displaying account details in an external system.
- Real-time payment processing systems.
- Validation scenarios where immediate feedback is necessary.
Some of the Examples in Salesforce
- Apex Callouts: Synchronous REST or SOAP callouts made from Apex classes in Salesforce.
- Visualforce or LWC Components: Making real-time external API calls to display data dynamically.
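To make the synchronous model concrete, here is a minimal sketch of a synchronous Apex REST callout. The ERP_API Named Credential, the /accounts resource, and the response shape are assumptions, not a real API; the key point is that the transaction blocks on Http.send() until the external system responds.

```apex
public with sharing class AccountDetailService {
    public class ErpException extends Exception {}

    // Synchronous callout: the transaction waits here until the
    // external system responds (or the timeout is reached).
    public static String fetchAccountStatus(String externalAccountId) {
        HttpRequest req = new HttpRequest();
        // 'ERP_API' is a hypothetical Named Credential configured in Setup.
        req.setEndpoint('callout:ERP_API/accounts/' + externalAccountId);
        req.setMethod('GET');
        req.setTimeout(120000); // maximum allowed: 120 seconds

        HttpResponse res = new Http().send(req);
        if (res.getStatusCode() == 200) {
            Map<String, Object> body =
                (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
            return (String) body.get('status');
        }
        throw new ErpException('ERP returned ' + res.getStatus());
    }
}
```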
Asynchronous Integration
Asynchronous integration is a deferred communication mechanism: the client sends a request to the server and proceeds with other tasks without waiting for an immediate response. The response, if any, is processed at a later time.
Key Characteristics
- Deferred Processing: Operations are queued and processed independently of the requesting system.
- No Immediate Response: The client does not block execution while waiting for a response.
- Loose Coupling: Systems do not need to be available simultaneously.
- Scalability: Handles large volumes of data more effectively than synchronous methods.
Common Use Cases
- Data synchronization between systems, such as batch processing or data migrations.
- Long-running processes like order fulfillment or data enrichment.
- Fire-and-forget scenarios where acknowledgment is not immediately needed.
Some of the Examples in Salesforce
- Platform Events: Used for publishing and subscribing to events asynchronously.
- Outbound Messages: Automate notifications to external systems without waiting for responses.
- Apex Queueable and Batch Classes: Handle heavy or complex processing in smaller chunks.
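To illustrate the asynchronous model, here is a minimal Queueable Apex sketch; the Fulfillment_API Named Credential and payload shape are assumptions. The enqueuing transaction returns immediately, and the callout runs later in its own context.

```apex
public class OrderSyncJob implements Queueable, Database.AllowsCallouts {
    private final Id orderId;

    public OrderSyncJob(Id orderId) {
        this.orderId = orderId;
    }

    public void execute(QueueableContext ctx) {
        // Runs asynchronously, after the enqueuing transaction has committed.
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Fulfillment_API/orders'); // hypothetical Named Credential
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(new Map<String, Object>{ 'orderId' => orderId }));
        new Http().send(req); // the user never waits for this response
    }
}
```

Enqueue it from a trigger or service class with System.enqueueJob(new OrderSyncJob(order.Id)); the caller moves on immediately.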
Types of Integration Patterns Categories in Salesforce
Integration patterns can be grouped into three main categories:
- Data Integration
- Focuses on keeping data in two or more systems synchronized so that they are always up-to-date and precise.
- It’s usually the easiest type of integration to implement, yet it requires good data management practices to remain efficient and cost-effective over time.
- Key techniques include managing duplicate records, designing proper data flows, and applying master data management (MDM) principles.
- Example MDM System → Salesforce: A nightly ETL job extracts updated customer records from the Master Data Management system and upserts them into Salesforce, ensuring customer details stay current.
- Process Integration
- Used when a business process requires multiple systems to work together to finish a task.
- One system acts as the “controller” (orchestrating the process), or all systems collaborate without a central controller (choreography).
- These integrations are more complex, often requiring robust design, testing, and error-handling mechanisms. They also involve long-running transactions and process monitoring capabilities.
- Example SAP → Salesforce → SAP: A new order is created in SAP and sent to Salesforce via API. Salesforce processes payment and shipping through external services, then sends the payment and shipping details back to SAP via a REST call for invoicing, completing the end‑to‑end process flow.
- Virtual Integration
- Allows users to access, search, and interact with data stored in an external system in real time, without copying the data into Salesforce.
- This approach eliminates the need for data replication and ensures users always see the most current information.
- Example External Inventory System → Salesforce (via Salesforce Connect): Salesforce Connect uses the OData adapter to map external inventory tables into Salesforce as external objects, enabling users to view and update live stock levels without replicating data locally.
Key Considerations
Choosing the right integration approach depends on factors like:
- The capabilities of the systems involved.
- The volume of data to be managed.
- How errors are handled during integration.
- Whether transactions need to be fully managed across systems.
Each pattern serves a specific purpose, and selecting the best one requires evaluating your business needs and the tools available.
Refer to the table below for pattern consideration. This is based on scenarios where another system integrates with Salesforce, or Salesforce is integrated with another system.
| Scenario | Type | Timing | Key Pattern to Consider |
|---|---|---|---|
| Another System → Salesforce | Process Integration | Synchronous | Remote Call-In |
| | Process Integration | Asynchronous | Remote Call-In |
| | Data Integration | Synchronous | Remote Call-In |
| | Data Integration | Asynchronous | Batch Data Synchronization |
| Salesforce → Another System | Process Integration | Synchronous | Remote Process Invocation—Request and Reply |
| | Process Integration | Asynchronous | Remote Process Invocation—Fire and Forget |
| | Data Integration | Synchronous | Remote Process Invocation—Request and Reply |
| | Data Integration | Asynchronous | UI Update Based on Data Changes |
| | Virtual Integration | Synchronous | Data Virtualization |
Types of Integration Patterns in Salesforce
1. Remote Process Invocation—Request and Reply
This integration pattern enables Salesforce to send a request to an external system to perform a specific task. Salesforce then waits synchronously for the external system to complete the task and send back a response. Once the response is received, Salesforce updates its records based on the information returned.
For example, an event occurs in Salesforce, like creating an order. Salesforce sends the details to an external system and waits for it to process the order. Once completed, the external system responds with information (like an order ID or status), which Salesforce uses to update its records in real time.
Benefits
- Immediate feedback: Users or processes receive the response (success/failure) right away, enabling real-time decision making.
- Simplified error handling: Errors can be surfaced directly to the caller, allowing immediate retries or rollbacks.
Limitations
- Timeout and governor limits: Apex callouts must complete within a 120-second timeout and count against synchronous governor limits.
- Low throughput: Suitable only for small volumes of data or low user concurrency, since each call blocks until completion.
Example Use Cases
- Payment authorization: Submitting a credit-card transaction to a gateway and immediately receiving approval or decline.
- Order creation: Creating an order in an ERP and returning an order number for display in Salesforce.
When to Use the Remote Process Invocation—Request and Reply Pattern
This pattern is ideal for scenarios where Salesforce needs to communicate with an external system in real-time. It waits for a response before completing the transaction.
Here are key considerations to determine if this pattern is suitable for your integration:
- Does Salesforce need to wait for a response?
- Use this pattern when Salesforce must pause processing until the external system provides a reply.
- Suitable for synchronous request-reply scenarios rather than asynchronous requests.
- Does the response need to be part of the same transaction?
- If the response from the external system needs to be processed within the same transaction, this pattern is ideal. It effectively handles calls that need to be processed immediately.
- Is the message size small?
- This pattern works well for small-sized messages that fall within Salesforce’s governor limits.
- What triggers the integration?
- Use this pattern for user-initiated actions. Examples include a button click in the Salesforce UI. It is also useful for specific events where immediate feedback is required.
- Can the external system respond quickly?
- Ensure the external system can provide a low-latency response to avoid timeouts.
- Consider the expected number of concurrent users or transactions during peak periods to assess system performance.
By addressing these considerations, you can determine whether the Remote Process Invocation—Request and Reply pattern aligns with your integration needs. This ensures seamless communication between Salesforce and external systems.
Solution Approaches
| Solution (Fit) | Details |
|---|---|
| Enhanced External Services (Declarative REST API in Flows) (Best) | • No code required—invoke RESTful services defined in OpenAPI 2.0 JSON schema. • Works when request/response use primitive types (boolean, datetime, integer, string, arrays) or simple nested objects. • Can be triggered directly from a Flow. |
| Lightning Component / Visualforce Page with Apex Callout (Best) | • User clicks a button or performs an action in Lightning or Classic UI. • Apex consumes a WSDL (SOAP) or makes HTTP callouts (GET/POST/PUT/DELETE). • For potentially slow endpoints, use asynchronous continuations to avoid hitting synchronous governor limits. |
| Apex Trigger with Asynchronous Callout (Suboptimal) | • Trigger fires on record changes and launches an @future or Queueable callout. • Must run asynchronously, so Salesforce can’t update records in the same transaction based on the response. • Better suited for “fire-and-forget” scenarios rather than real‑time request‑reply. |
| Batch Apex Job with Synchronous Callouts (Suboptimal) | • Useful for processing large datasets in batches (e.g., 200 records per execution). • Each batch chunk can make callouts and handle responses. • Not designed for immediate user feedback—best for bulk or off‑peak operations. • Governed by batch and callout limits per transaction. |
Important Considerations
- Timeliness:
- Calls must complete quickly (within 120 seconds). Use continuations for long-running transactions.
- Error Handling and Recovery:
- Handle errors gracefully by logging failures and allowing retries. Data updates should occur only after successful responses.
- Idempotency:
- Ensure repeated requests do not lead to duplicate records. Use unique identifiers in Salesforce and the external system to manage transactions.
- Security:
- Use SSL/TLS for secure communication. Authenticate requests and validate responses to maintain data integrity.
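For the Timeliness point above, slow endpoints invoked from the UI can use an Apex Continuation so the request doesn’t hold a synchronous Apex thread while waiting. A minimal sketch, assuming a hypothetical Slow_ERP Named Credential:

```apex
public with sharing class SlowServiceController {
    @AuraEnabled(continuation=true)
    public static Object startRequest() {
        Continuation con = new Continuation(60); // timeout in seconds (max 120)
        con.continuationMethod = 'processResponse';

        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Slow_ERP/quote'); // hypothetical Named Credential
        req.setMethod('GET');
        con.state = con.addHttpRequest(req); // label used to fetch the response later

        return con; // frees the server thread while the callout is in flight
    }

    @AuraEnabled
    public static Object processResponse(List<String> labels, Object state) {
        // Invoked when the external system responds.
        HttpResponse res = Continuation.getResponse(labels[0]);
        return res.getBody();
    }
}
```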
Example 1 – Using External Services to Invoke a REST Web Service.
A company wants to integrate Salesforce with their shipment tracking system to track order shipments. The shipment tracking system provides a RESTful web service that adheres to the OpenAPI 2.0 specification. This service returns real-time tracking details for a given shipment ID in JSON format.
The solution works as follows:
- Uploading the OpenAPI 2.0 specification file (or providing its URL) and using Salesforce’s External Services feature to generate Apex actions corresponding to the REST endpoints defined in the OpenAPI file.
- Creating a Screen Flow that lets users input the shipment ID and invokes the generated Apex actions, which call the REST web service.
- Rendering the shipment’s status, current location, and estimated delivery time returned by the REST service in JSON format; the response fields are mapped to variables in the Flow for real-time display.
- Embedding the Flow in a Lightning App Builder page so users can retrieve shipment tracking details directly from Salesforce in real time.
Note – OpenAPI 2.0, also known as Swagger, is a specification for RESTful APIs. It defines a standard way to describe an API’s endpoints, request/response formats, authentication, and more, providing a machine-readable contract that lets developers interact with the service seamlessly and enables integration and automation.
Example 2 – Using a Lightning Component / VF Page Calling an External SOAP Web Service.
A company wants to integrate Salesforce with their billing system to display a customer’s billing history in Salesforce. The billing system exposes a SOAP-based web service that accepts an account number and returns a list of bills and their details for that account in XML format.
The solution works as follows:
- Consuming the billing service WSDL in Salesforce to generate an Apex proxy class for interacting with the SOAP-based web service.
- Creating a Lightning component or Visualforce page that initiates the Apex callout, passing the account number to the SOAP web service.
- Rendering the customer’s billing history in real time, including the list of bills and their details returned by the external system in XML format.
Example 3 – Using a Lightning Component / VF Page Calling an External REST Web Service.
A company wants to integrate Salesforce with a payment gateway (e.g., PayPal or Authorize.Net) to process customer payments directly from Salesforce. The payment gateway exposes a RESTful API that accepts payment details (card number, expiry, amount, etc.). This service returns a JSON payload with transaction status, authorization code, and any error messages.
The solution works as follows:
- Building a custom Apex callout class that constructs and sends an HTTP POST request to the payment gateway’s REST endpoint, passing merchant credentials and the customer’s payment information, then parsing the JSON response into Apex objects.
- Creating a Lightning component or Visualforce page with a secure payment form that invokes the Apex callout when the user submits their details, handling both success and error callbacks.
- Rendering the transaction outcome directly in the component in real time (for example, “Payment Approved,” “Authorization Code: 1A2B3C,” or a specific decline reason) and logging the response in a custom Salesforce object for audit and reconciliation.
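A minimal sketch of what the Apex callout class in this example might look like; the Payment_Gateway Named Credential, the /charges resource, and the response fields are assumptions rather than a real gateway API:

```apex
public with sharing class PaymentGatewayClient {
    // Typed container for the gateway's JSON response (assumed shape).
    public class GatewayResult {
        public String status;   // e.g. 'APPROVED' or 'DECLINED'
        public String authCode;
        public String message;
    }

    public static GatewayResult charge(Decimal amount, String cardToken) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Payment_Gateway/charges'); // hypothetical Named Credential
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(new Map<String, Object>{
            'amount'    => amount,
            'cardToken' => cardToken
        }));

        HttpResponse res = new Http().send(req);
        // Parse the JSON payload into a typed Apex object.
        return (GatewayResult) JSON.deserialize(res.getBody(), GatewayResult.class);
    }
}
```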
Refer to the Salesforce documentation for more details – Remote Process Invocation—Request and Reply
2. Remote Process Invocation—Fire and Forget
This integration pattern allows Salesforce to send a request to an external system to perform a task without waiting for it to complete. The request is processed asynchronously: the external system handles the task independently while Salesforce continues its other operations without delay.
For example, when a new lead is created in Salesforce, the lead information is sent to an external marketing platform for email campaign enrollment. Salesforce does not wait for the marketing platform to confirm the enrollment; it continues processing other tasks. Later, the marketing platform asynchronously updates the lead record with the enrollment status, ensuring a seamless workflow without delays in Salesforce operations.
- Benefits
- High responsiveness: The calling transaction returns immediately, improving user experience on long-running operations.
- Decoupling: Loose coupling between systems, since Salesforce doesn’t depend on the remote system’s processing time.
- Limitations
- Delayed error visibility: Failures in the downstream process must be handled via callbacks or retry mechanisms, which adds complexity.
- Guaranteed delivery concerns: Additional infrastructure (platform events, outbound messaging) or custom retry logic is needed for end-to-end reliability.
Use Cases
- Email notifications: Publishing a platform event when a record is created, with an external service subscribed to send emails.
- Order processing: Sending order details to a middleware queue for later processing, without making the user wait.
When to Use the Remote Process Invocation—Fire and Forget
This pattern is ideal for scenarios where Salesforce needs to send information to an external system without waiting for a response, allowing Salesforce to continue processing independently of the external system’s response time.
Here are key considerations to determine if this pattern is suitable for your integration:
- Does Salesforce need to wait for a response?
- Use this pattern when Salesforce does not need to pause processing for an acknowledgment or reply from the external system.
- Suitable for asynchronous “fire-and-forget” scenarios rather than synchronous request‑reply interactions.
- Is the response required in the same transaction?
- If there’s no need to process the external system’s response as part of the initiating transaction, this pattern is a good fit.
- Decoupling the call and response helps avoid locking Salesforce resources while waiting.
- Is the message size small?
- This pattern works well for small-sized messages that fall within Salesforce’s governor limits.
- What triggers the integration?
- Ideal for event-driven scenarios—record inserts/updates or batch jobs—where real-time acknowledgment isn’t critical.
- Also works for user‑initiated actions when immediate feedback isn’t required.
- Is guaranteed delivery a concern?
- Use middleware, persistent queues, or outbound messaging to ensure messages aren’t lost.
- Implement retry or dead‑letter strategies to handle transient failures.
- Can the remote endpoint support a contract-first integration?
- In solutions like outbound messaging, Salesforce defines a WSDL contract that the remote endpoint must implement.
- Ensure the external system can adhere to the contract specified by Salesforce.
- Can the remote endpoint support long polling or streaming?
- If you leverage platform events, ensure the endpoint or ESB can subscribe (e.g., via CometD) and replay missed events.
- Long polling capabilities improve the reliability and timeliness of message consumption.
- Are declarative methods preferred over custom Apex?
- Favor solutions like platform events or workflow‑driven outbound messaging to reduce code maintenance.
- Use Apex callouts only when declarative options can’t satisfy your requirements.
By addressing these considerations, you can determine whether the Remote Process Invocation—Fire and Forget pattern aligns with your integration needs. This ensures seamless communication between Salesforce and external systems.
Solution Approaches
| Solution (Fit) | Details |
|---|---|
| Process-Driven Platform Events (Best) | • No code needed—automatically publish events on record insert/update. • Multiple subscribers (Salesforce or external) can listen and act. • Ideal for notifying external systems in real time. |
| Customization-Driven Platform Events (Good) | • Use Apex (triggers or classes) to publish events • Subscribers (Apex triggers or external) receive and handle messages. • Offers more control over when and how events are published. |
| Flow / Workflow-Driven Outbound Messaging (Good) | • Declaratively send SOAP messages on record changes. • Guaranteed delivery with retries if remote system doesn’t acknowledge. • Best for simple, reliable “fire-and-forget” SOAP integrations. |
| Outbound Messaging with Callbacks (Good) | • Adds a callback step so Salesforce can fetch related data after the initial message. • Ensures idempotency and handles multi-object data retrieval. • Uses SessionId from the outbound message for secure callback authentication. |
| Custom Lightning/VF Component → Async Callout (Suboptimal) | • User-initiated UI action triggers an asynchronous Apex callout. • Requires custom code for guaranteed delivery and error handling. • Better suited for non-critical or low-volume async tasks. |
| Apex Trigger → Async Callout (Suboptimal) | • Trigger on record changes launches an async callout (@future or Queueable) • Cannot update same record in the same transaction • More error handling needed—better for non‑real‑time needs |
| Batch Apex → Async Callout (Suboptimal) | • Batch job processes records in chunks and makes async callouts • Handles large volumes but not for real-time use • Limited by callout limits per batch—best for scheduled bulk operations |
Important Considerations
- Timeliness:
- Ensure events or messages are delivered with minimal delay. Use a low‑latency delivery mechanism (e.g., Platform Events or Streaming API) so subscribers receive notifications in near real‑time.
- Error Handling and Recovery:
- Any robust integration must include both proactive error handling and a clear recovery path tailored to the chosen delivery mechanism:
- Platform Events:
- Error handling: Events are fire‑and‑forget; any delivery or processing errors are the responsibility of the subscriber. Because events aren’t wrapped in database transactions, publishes can’t be rolled back.
- Recovery: Consumers can replay from a given replay ID (which increments atomically per event) for up to 72 hours. Expose a “replay from last success” control so operators can resume processing where it left off.
- Outbound Messaging:
- Error handling: Salesforce itself retries unacknowledged deliveries for up to 24 hours, starting at 15-second intervals and doubling up to 60 minutes. After 24 hours, failed messages land in the Outbound Messaging queue.
- Recovery: Admins must monitor that queue. They should manually kick off retries for any messages older than 24 hours. Alternatively, they can request an extension to 7 days via Salesforce Support.
- Apex Callouts
- Error handling: Catch callout exceptions (timeouts, 5xx responses) immediately. Beyond that, Salesforce hands off to the remote system, so any downstream failures aren’t visible to the caller.
- Recovery: If you require guaranteed delivery, build a custom retry queue in Salesforce (e.g. custom object + scheduled Apex) to re‑invoke failed callouts according to your SLA.
- Idempotency:
- Design consumers to tolerate duplicate deliveries. Include a unique transaction or event ID in the payload, and have the subscriber check for its existence before acting; this prevents creating duplicate records.
- Outbound Messaging & Platform Events both ship a stable ID across retries. This ID can be the outbound message ID or the platform event’s replay ID. Leverage that to filter duplicates.
- Apex Callouts need you to generate and send your own GUID per request. This is important if you want retry-safe semantics. Store that GUID with each remote change. This ensures that repeated callouts don’t create duplicate side‑effects.
- Security:
- Transmit all messages over HTTPS/TLS. Authenticate publishers and subscribers (OAuth, certificates) and validate incoming payloads against schemas or signatures to prevent tampering. Enforce CRUD/FLS and event‑entity permissions in Salesforce.
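On the subscriber side, the idempotency advice above can be implemented with a platform event trigger that checks a dedupe log before acting. A minimal sketch, reusing the Order_Shipment__e event from Example 1 below; Processed_Event__c is a hypothetical custom object used as the log:

```apex
trigger OrderShipmentSubscriber on Order_Shipment__e (after insert) {
    Set<String> incomingIds = new Set<String>();
    for (Order_Shipment__e evt : Trigger.new) {
        incomingIds.add(evt.OrderId__c);
    }

    // Look up which of these events were already processed.
    Set<String> alreadySeen = new Set<String>();
    for (Processed_Event__c p : [SELECT Event_Key__c FROM Processed_Event__c
                                 WHERE Event_Key__c IN :incomingIds]) {
        alreadySeen.add(p.Event_Key__c);
    }

    List<Processed_Event__c> processedLog = new List<Processed_Event__c>();
    for (Order_Shipment__e evt : Trigger.new) {
        if (alreadySeen.contains(evt.OrderId__c)) {
            continue; // duplicate delivery; skip it
        }
        // ... act on the event here (create records, enqueue callouts, etc.) ...
        processedLog.add(new Processed_Event__c(Event_Key__c = evt.OrderId__c));
    }
    insert processedLog;
}
```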
Example 1 – Using Platform Events (Declarative Fire‑and‑Forget).
A retailer needs to notify its external shipping partner as soon as an Order is created in Salesforce. The shipping partner subscribes to a custom Platform Event channel that accepts Order ID and shipment details.
The solution works as follows:
- Defining a Platform Event: Create an Order_Shipment__e event with fields like OrderId__c, ShipDate__c, and Carrier__c.
- Publishing Declaratively: In the Order After‑Save Record‑Triggered Flow, add a “Publish Platform Event” element and map the Order’s ID and shipment info.
- External Subscriber Actions: The shipping system (or middleware) subscribes to the Platform Event stream via CometD or EMP Connector and processes each event as it arrives. No response is required.
Example 2 – Customization‑Driven Platform Events.
A global HR system needs to kick off multiple downstream processes when a new employee record is created in Salesforce.
The solution works as follows:
- Defining a Platform Event: Create a New_Employee__e event with fields like EmployeeId__c, StartDate__c, and Department__c.
- Publishing via Apex: Use an after insert trigger on Employee__c that calls an Apex helper class to publish a New_Employee__e Platform Event with employee details.
- External Subscriber Actions: A middleware service (e.g., MuleSoft) subscribes to the New_Employee__e channel to provision accounts in Active Directory and notify payroll systems, with no hardcoded scheduling required.
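A minimal sketch of the publishing trigger for this example; the Employee__c field names are assumptions:

```apex
trigger EmployeeTrigger on Employee__c (after insert) {
    List<New_Employee__e> events = new List<New_Employee__e>();
    for (Employee__c emp : Trigger.new) {
        events.add(new New_Employee__e(
            EmployeeId__c = emp.Id,
            StartDate__c  = emp.Start_Date__c,  // assumed field name
            Department__c = emp.Department__c   // assumed field name
        ));
    }

    // Fire-and-forget: the save results only confirm the events were queued,
    // not that any subscriber has processed them.
    for (Database.SaveResult sr : EventBus.publish(events)) {
        if (!sr.isSuccess()) {
            System.debug(LoggingLevel.ERROR, 'Publish failed: ' + sr.getErrors());
        }
    }
}
```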
Example 3 – Using Flow / Workflow‑Driven Outbound Messaging.
An insurance provider wants to send new Claim records to its legacy claims‑processing system without waiting for a response. The legacy system exposes a SOAP endpoint.
The solution works as follows:
- Configuring an Outbound Message: In Setup, define an Outbound Message on the Claim object. Include fields like ClaimNumber, PolicyHolder, and ClaimAmount. Ensure it points to the SOAP endpoint URL.
- Adding a Flow / Workflow Rule: Create a Flow / Workflow that fires when a Claim is created or when a Claim’s status changes to “Submitted”.
- Automatic Delivery: Salesforce sends the SOAP message asynchronously; the legacy system acknowledges receipt with a minimal SOAP ACK. Salesforce retries delivery until it succeeds, but no further processing occurs in real time.
Refer to the Salesforce documentation for more details – Remote Process Invocation—Fire and Forget
3. Batch Data Synchronization
This integration pattern enables Salesforce and external systems to periodically exchange large volumes of data in bulk. Salesforce extracts, transforms, and loads data to or from an external system according to a scheduled or triggered process. The synchronization runs asynchronously—often during off‑peak windows—so that both systems stay up‑to‑date without impacting real‑time operations in Salesforce.
For example, at the end of each day, Salesforce exports all newly created customer records. These records are sent in bulk to an external data warehouse. Overnight, the warehouse processes the data and returns any updates—such as enriched customer attributes or status changes—back into Salesforce. This scheduled, asynchronous exchange ensures high‑volume data consistency while preserving Salesforce’s performance for real‑time users.
Benefits
- Large data volumes: Ideal for processing millions of records efficiently using Bulk API and ETL tools.
- Off-peak processing: Runs can be scheduled during low-usage periods to minimize impact on system performance and users.
- Data consistency: Ensures both Salesforce and external systems stay synchronized with up-to-date information.
- Flexibility: Supports complex data transformations and validations during the synchronization process.
Limitations
- Not real-time: Data updates occur on a schedule, so changes aren’t reflected immediately.
- Complexity: Requires setup and maintenance of ETL processes or integration middleware.
- Error handling: Failures in batch jobs may require manual intervention or retries.
- Resource consumption: Large batch jobs can consume significant system resources and API limits.
When to Use the Batch Data Synchronization
This pattern is ideal for scenarios where data needs to move regularly into or out of Salesforce without disrupting peak‑time user activity. It lets you offload heavy data transfers to controlled batch windows.
Here are key considerations to determine if this pattern is suitable for your integration:
- Should the data reside in Salesforce?
- If not, consider real‑time mashup patterns or external reporting tools instead of bulk storage.
- If yes, you’ll need a reliable mechanism to keep Salesforce data current.
- When should data be refreshed?
- Event‑driven refresh:
- Trigger updates when the source system emits a change event.
- Good for near‑real‑time consistency but generates many small batches.
- Scheduled refresh:
- Run large imports/exports during off‑peak hours (nightly or weekly).
- Minimizes impact on end‑user performance.
- Is this data critical to core business processes?
- If data drives UI workflows or transactional logic, prioritize faster, more frequent syncs.
- If it’s used primarily for archival or reference, longer intervals can suffice.
- Are there reporting or analytics requirements?
- If dashboards and reports depend on up‑to‑date data, ensure sync cadence aligns with reporting needs.
- For historical analysis, consider incremental updates to minimize data movement.
By addressing these considerations, you can determine whether a Batch Data Synchronization approach meets your integration requirements. This ensures reliable data availability in Salesforce while minimizing performance impacts on end users.
Solution Approaches
| Solution (Fit) | Data Master | Details |
|---|---|---|
| Salesforce Change Data Capture (Best) | Salesforce | • Publishes near-real-time events for record create, update, delete, and undelete. • Ideal when Salesforce is the source of truth. • External app subscribes and applies deltas automatically. |
| Replication via ETL Tool (Source‑Driven) (Best) | Remote System | • Third‑party ETL watches the external system for changes. • Transforms data and uses Bulk or SOAP API to update Salesforce. • Great for keeping Salesforce in sync with another master system. |
| Replication via ETL Tool (Salesforce‑Driven) (Best) | Salesforce | • ETL tool polls Salesforce (via SOQL, getUpdated, or SOAP API) for changed records. • Useful when Salesforce data drives external systems. • Works on schedule or near real-time. |
| Remote Call‑In (Suboptimal) | Remote System | • External system calls Salesforce APIs to push updates. • Can cause heavy, constant traffic. • Requires robust error handling and record locking to avoid performance issues. |
| Data Master – Salesforce Remote Process Invocation (Suboptimal) | Salesforce | • Salesforce calls external APIs on each change. • Leads to frequent two‑way traffic. • Needs strong error handling and locking to prevent performance degradation. |
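Where ETL middleware isn’t available, a scheduled Batch Apex job can approximate the Salesforce-driven replication row above; a minimal sketch, with a hypothetical Warehouse_API Named Credential:

```apex
public class AccountExportBatch implements Database.Batchable<SObject>,
                                            Database.AllowsCallouts {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        // Select only records changed since the last run (here: yesterday).
        return Database.getQueryLocator(
            'SELECT Id, Name, Phone FROM Account WHERE LastModifiedDate = YESTERDAY');
    }

    public void execute(Database.BatchableContext bc, List<Account> scope) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Warehouse_API/accounts'); // hypothetical Named Credential
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(scope));

        HttpResponse res = new Http().send(req);
        if (res.getStatusCode() != 200) {
            // Persist failures somewhere queryable so the chunk can be retried.
            System.debug(LoggingLevel.ERROR, 'Chunk failed: ' + res.getStatus());
        }
    }

    public void finish(Database.BatchableContext bc) {}
}
```

A nightly scheduled job would kick this off with Database.executeBatch(new AccountExportBatch(), 200).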
Important Considerations
- Timeliness:
- Batch jobs must complete within a designated window (often outside business hours) to avoid contention with interactive users.
- Monitor job runtimes and segment large data sets (for example, by record type or region) to ensure all batches finish on schedule.
- Error Handling and Recovery:
- Any robust batch‑data synchronization must include both proactive error handling and a clear recovery path tailored to each step of the pipeline:
- Read from Salesforce using Change Data Capture
- Error handling: All CDC events are handed off asynchronously. Your ETL or subscriber must catch and log any exceptions during event processing. Because CDC events aren’t tied to a Salesforce transaction, they cannot be rolled back once published.
- Recovery: Leverage the CDC replay ID to replay the event stream from the last successfully processed event. Salesforce retains CDC messages for up to 72 hours; expose a “replay from last checkpoint” option in your client.
- Read from Salesforce using a 3rd‑party ETL system
- Error handling: For transient read errors (e.g. network hiccups, timeouts), implement automatic retries with exponential back‑off. For persistent failures, log details to an error/control table with:
- batch identifier
- error timestamp
- API response code and message
- Recovery: Provide a “Rerun failed extract batches” mechanism. On rerun, the ETL should re‑query only the failed record set or the full batch, depending on SLA. Optionally allow a delayed restart to give data stewards time to correct source data.
- Write to Salesforce
- Error handling: Salesforce bulk or REST API calls return per‑record results including:
- record identifier(s)
- success/failure flag
- list of field‑level errors
Parse these results and persist failures in a write‑error table.
- Recovery: Surface a “Retry failed writes” function that re‑submits only those records that errored. For catastrophic failures (e.g., org‑wide lock errors), allow an immediate batch rerun or a delayed retry window to avoid contention.
- External master system
- Error handling: Defer to the master system’s own error‑handling best practices (e.g., transactional rollbacks, poison‑message queues). Ensure your sync logic traps and logs any HTTP or protocol errors when interacting with the external system.
- Recovery: Where supported, leverage the master’s native retry or replay mechanisms (e.g., message replay, idempotent endpoints). If unavailable, provide a manual “Sync missing records” report that operators can use to trigger a backfill.
- Log errors with enough context (record IDs, error messages) to enable targeted retries of only the failed subsets.
- Provide alerts or dashboards so administrators can intervene when repeated failures occur.
- Idempotency:
- Use stable, unique identifiers in both systems. These can be external IDs or surrogate keys. This practice prevents duplicate inserts or updates when jobs are re‑run.
- Design your ETL mappings so that unchanged records are upserted without side effects, and only changed records are processed.
- Security:
- Use a dedicated “API Only” or integration user with least‑privilege permissions for ETL access.
- Always connect over HTTPS and, where possible, enforce mutual TLS or IP whitelisting.
- Encrypt sensitive fields at rest and in transit (for example, via Shield Platform Encryption).
Example 1 – Using Salesforce Change Data Capture (Real‑Time Replication).
A financial services firm needs to keep its external analytics warehouse continuously synchronized with Salesforce Account and Opportunity data. They choose Salesforce Change Data Capture (CDC) to push record‑level deltas as they happen.
The solution works as follows:
- Enabling Change Data Capture in Salesforce
- In Setup → Change Data Capture, select the Account and Opportunity objects. Salesforce will then publish a change event whenever a selected record is created, updated, deleted, or undeleted.
- Integration App Subscribes to CDC Channels
- A lightweight Node.js (or Java/.NET) service runs outside Salesforce. It connects via the EmpConnector (CometD over Streaming API) to the /data/AccountChangeEvent and /data/OpportunityChangeEvent channels.
- The connector negotiates a replay ID on startup, so it can resume from the last‑processed event in case of downtime.
- Process Incoming Events
- Each event payload carries:
  - ChangeEventHeader.recordIds (the Salesforce record IDs)
  - ChangeEventHeader.changeType (CREATE, UPDATE, DELETE, UNDELETE)
  - ChangeEventHeader.replayId (monotonic sequence)
  - the union of all changed fields and their new values.
- The service maps these fields to the external schema (e.g., AccountId → account_id) and builds a batch upsert or delete payload for the analytics warehouse.
- Upsert into the External Data Store
- Using the target system’s bulk API (e.g., Snowflake’s Snowpipe REST endpoint, Redshift COPY, or a custom database connector), the service pushes only the delta rows.
- For deletes, it marks or removes the corresponding rows based on the Salesforce record ID.
- Monitoring, Recovery & Idempotency
- Error logging: Any failed batch is written to a local “failed_events” queue with its replay ID.
- Replay capability: On restart, or on operator demand via a “replay failures” button, the service reconnects with the lowest unacknowledged replay ID and re‑consumes events from that point. Salesforce retains CDC events for 72 hours.
- Deduplication: Events can be replayed, so the service checks its own processed_replay_ids table before acting on each one. This ensures true idempotency even if the same event arrives twice.
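Although this example uses an external Node.js subscriber, the same CDC stream can also be consumed inside Salesforce with an asynchronous Apex change event trigger. A minimal sketch for the Account channel:

```apex
// Runs asynchronously whenever an Account change event is published.
trigger AccountChangeSubscriber on AccountChangeEvent (after insert) {
    for (AccountChangeEvent evt : Trigger.new) {
        EventBus.ChangeEventHeader header = evt.ChangeEventHeader;
        System.debug('Change type: ' + header.changeType); // CREATE, UPDATE, ...
        System.debug('Record IDs : ' + header.recordIds);
        // Map the changed fields to the target schema here, or enqueue a
        // Queueable callout to forward the delta downstream.
    }
}
```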
Example 2 – Using Change Data Capture In External System and ETL.
A utility company runs a nightly mainframe batch that assigns prospects to sales reps and teams. These assignment updates must be imported into Salesforce each night. The customer uses a commercial ETL tool that subscribes to Change Data Capture (CDC) on the source tables and pushes the changes into Salesforce. The solution involves:
- Configuring CDC on the source database
- Modern relational databases—including Db2 on z/OS, Oracle, SQL Server, PostgreSQL, etc.—maintain a transaction log or “journal” of every row insert/update/delete (CDC).
- Enable Change Data Capture for the Prospect_Assignments table (or equivalent) in the mainframe’s upstream data store.
- The CDC feed emits a change record whenever a prospect’s owner or team assignment is inserted or updated.
- Scheduling the mainframe batch
- A cron‑like scheduler triggers the nightly batch job (e.g., at 2 AM) that computes and writes the new prospect assignments into the source table.
- Once the batch completes, CDC change entries accumulate in the CDC log.
- ETL tool picks up CDC events
- The ETL connector hooks into the log journal. It reads the change records as they’re committed—no triggers, no code changes to your batch job.
- Shortly after the batch job finishes, the ETL connector polls or subscribes to the CDC stream and retrieves all change records for the Prospect_Assignments table.
- The connector collates inserts and updates into a single “delta” payload.
- Transforming & Loading into Salesforce
- The ETL tool maps each change record to the corresponding Salesforce object (e.g., setting OwnerId and Team__c on the Lead or Prospect__c record).
- It then uses the Salesforce SOAP or Bulk API to upsert by Prospect external ID, pushing only the changed fields.
- Monitoring & Recovery
- The ETL tool logs success/failure for each record. Any failed rows are written to an error table.
- An operations dashboard displays nightly batch status; administrators can click “Retry Failed Records” to re‑submit only the errored subset.
Refer to the Salesforce documentation for more details – Batch Data Synchronization
4. Remote Call-In
This integration pattern allows external systems to invoke Salesforce logic and data operations via SOAP or REST APIs, including custom Apex REST services, to create, read, update, or delete records. When an external application needs to push or pull data, it sends an API request to Salesforce endpoints, which then execute triggers, validations, flows, or Apex controllers as configured.
For example, an e‑commerce platform pushes new orders into Salesforce. When an order is placed, the platform calls a custom Apex REST service in Salesforce, passing order details in a JSON payload. Salesforce’s API layer authenticates the request, invokes the Apex controller to create Order and Order Item records, and returns a confirmation response. This real‑time, event‑driven interaction ensures that Salesforce stays synchronized with external systems without user intervention.
Benefits
- Centralized logic: Business rules and validation reside in Salesforce, ensuring consistency regardless of the caller.
- Flexible access: Multiple API types (SOAP, REST, Bulk, etc.) allow optimal integration for different payload sizes and security needs.
Limitations
- API limits: Limited to per-org API call quotas and concurrent request considerations.
- Network dependency: External callers must manage connectivity, authentication, and retry logic for resilience.
Use Cases
- Legacy system sync: An on-premises ERP system calls Salesforce REST API to update order statuses in real time.
- Microservice orchestration: A middleware service invokes an Apex REST method to trigger a composite business process.
When to Use the Remote Call-In Pattern
This pattern is ideal for scenarios where an external system needs to connect to Salesforce. The connection could be for notifying Salesforce of events, creating new records, or updating existing records. It supports standard synchronous request-reply interactions, such as over SOAP or REST. The remote process can discard the response if immediate feedback isn’t required.
Here are key considerations to determine if this pattern is suitable for your integration:
- What is the primary purpose of the remote call?
- Event notification: Use a decoupled, event-driven approach when the remote system simply needs to inform Salesforce of external events.
- CRUD operations: Use direct API calls for CRUD actions. This is necessary when the remote system must create, read, update, or delete Salesforce records.
- Does the remote process need to wait for a response?
- Remote calls to Salesforce are inherently synchronous request-reply over HTTP.
- If the remote process does not require the response, it can issue the call. The process can ignore the returned payload to simulate an asynchronous flow.
- What object scope does each transaction involve?
- Single-object transactions are simpler and often preferred for performance and error isolation.
- Multi-object or related-record operations require composite or bulk API calls to maintain consistency.
- Which message format will you use?
- SOAP: Choose SOAP when you need strict contract-first integrations with a predefined WSDL supplied by Salesforce.
- REST: Opt for REST when you prefer lightweight, flexible payloads and simpler call semantics.
- Is the message size small or large?
- For small payloads (typical CRUD operations), REST or SOAP over HTTP works well.
- For larger data volumes, consider the Bulk API or chunking the payload to avoid timeouts.
- Can the remote system support a contract-first approach?
- If using Salesforce’s SOAP API, the remote system must implement the WSDL contract provided by Salesforce.
- REST integrations do not require a formal WSDL but should still adhere to agreed JSON schemas.
- Is transaction processing required?
- Use composite or transaction-specific APIs (like the Composite API) when multiple records must be created or updated atomically.
- For best-effort operations, simple single-call APIs suffice.
- How tolerant are you of customization in Salesforce?
- If you prefer minimal configuration, leverage out-of-the-box API endpoints.
- For complex routing, validation, or enrichment, you need Apex REST/SOAP services or triggers to process incoming calls.
By addressing these considerations, you can determine whether the Remote Call-In pattern meets your integration requirements. This ensures secure, reliable, and performant communication from external systems into Salesforce.
| Solution (Fit) | Details |
|---|---|
| SOAP API (Best for structured, enterprise integrations) | • Access via generated WSDL (Enterprise or Partner). • Supports query, create, update, delete, metadata, and admin calls. • Synchronous only; the client waits for the response. • Respects user‑level security and sharing. • Supports partial‑success or “all‑or‑nothing” transaction modes. • Use Bulk API 2.0 for >2,000 records. |
| REST API (Best for lightweight web/mobile apps) | • Access via simple HTTP (GET/POST/PUT/PATCH/DELETE). • No WSDL or strict contract needed; supports JSON and XML. • Synchronous only; the client waits for the response. • Respects user‑level security and sharing. • Default: each record is a separate transaction; “all‑or‑nothing” optional. • Composite endpoints let you batch multiple operations in one call. • Use Bulk API 2.0 for >2,000 records. |
| Apex SOAP Web Services (Suboptimal) | • Expose custom Apex methods as SOAP endpoints. • Use when you need full transactional control or custom logic before commit. • Requires writing and maintaining Apex code. • Not supported for platform events. |
| Apex REST Services (Suboptimal) | • Expose Apex methods as REST endpoints (annotated classes). • No WSDL; clients just send HTTP requests and process JSON/XML. • Can use composite resources for multi-step transactions. • Requires custom Apex code. • Not supported for platform events. |
| Bulk API 2.0 (Optimal for bulk data) | • Asynchronous, REST‑based API for large data loads (>2,000 records). • Submit batches that run in the background. • Same security model as REST API. • Ideal for data migrations or mass updates. • Also supports publishing platform events (create only). |
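One reason to accept the “suboptimal” custom Apex services is the full transactional control before commit that the table notes. A minimal sketch, assuming hypothetical `Invoice__c` and `Payment__c` objects, using a savepoint so a multi-object write is all-or-nothing:

```apex
// Sketch of all-or-nothing processing in a custom Apex REST method;
// Invoice__c and Payment__c are hypothetical objects.
@RestResource(urlMapping='/billing/invoice-with-payment')
global with sharing class InvoiceService {

    @HttpPost
    global static String createInvoiceWithPayment(
            String accountId, Decimal amount) {
        // Mark the transaction state so we can roll back on any failure.
        Savepoint sp = Database.setSavepoint();
        try {
            Invoice__c inv = new Invoice__c(
                Account__c = accountId, Amount__c = amount);
            insert inv;

            Payment__c pay = new Payment__c(
                Invoice__c = inv.Id, Amount__c = amount);
            insert pay;

            return inv.Id;
        } catch (Exception e) {
            // Undo both inserts so no partial data is committed.
            Database.rollback(sp);
            RestContext.response.statusCode = 400;
            return 'Error: ' + e.getMessage();
        }
    }
}
```

Apex REST deserializes the JSON body keys (`accountId`, `amount`) into the method parameters automatically.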
Important Considerations
- Timeliness:
- Remote calls into Salesforce must complete swiftly (within 120 seconds).
- For multi-record operations or longer processing, consider Bulk API 2.0 or chunked Composite calls to avoid timeouts.
- Error Handling and Recovery:
- The remote system (or middleware) should catch and log any API errors (authentication failures, timeouts, validation errors).
- Implement retry logic for transient failures; use exponential back-off to avoid API throttling.
- If you require guaranteed delivery, incorporate dead-letter queues or alerting so administrators can address persistent failures.
- Idempotency:
- Design calls so that repeated invocations do not create duplicate records.
- Use External ID fields with upsert operations, in both standard REST/SOAP calls and custom Apex services, so that calls can be safely retried without side effects (see the sketch after this list).
- Security:
- Always use HTTPS/TLS for API communications.
- Authenticate via OAuth 2.0 (recommended) or valid session IDs, caching tokens to minimize login calls.
- Restrict access by IP whitelisting, profile or permission-set controls, and least-privilege integration users.
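Here is a minimal sketch of that idempotent upsert on the Salesforce side, reusing the hypothetical `Order__c` object and `ExternalOrderId__c` External ID field from the earlier example:

```apex
// Upsert keyed on a (hypothetical) External ID field: the first call
// inserts, and any retry with the same key updates the same record
// instead of creating a duplicate.
Order__c ord = new Order__c(
    ExternalOrderId__c = 'WEB-10042',   // caller-supplied unique key
    Status__c = 'Received');            // Status__c is also hypothetical

// allOrNone = false reports failures in the result instead of throwing.
Database.UpsertResult result = Database.upsert(
    ord, Order__c.ExternalOrderId__c, false);

System.debug(result.isCreated()
    ? 'Inserted new order ' + result.getId()
    : 'Updated existing order ' + result.getId());
```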
Example
A logistics provider wants its warehouse management system (WMS) to notify Salesforce when a shipment is dispatched so that account and order records in Salesforce reflect real-time shipment status:
- Authentication:
- The WMS obtains an OAuth access token from Salesforce using the client-credentials flow.
- REST Notification:
- When a shipment is dispatched, the WMS issues an HTTP POST to Salesforce’s standard Order REST endpoint.
- Processing in Salesforce:
- A trigger on the Order object fires to update related Shipment__c records and log the status change (a sketch of this trigger follows these steps).
- SOAP Query (Optional):
- Later, the WMS can perform a SOAP API query to retrieve delivery confirmations and any customer feedback stored back in Salesforce.
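A rough sketch of the trigger in step 3; Order and its standard Status field exist, but the `Shipment__c` object, its `Order__c` lookup, and its `Status__c` field are assumptions for illustration:

```apex
// Propagates order status changes to related (hypothetical)
// Shipment__c records so they reflect real-time shipment status.
trigger OrderStatusTrigger on Order (after update) {
    // Collect only the orders whose status actually changed.
    Set<Id> changedOrderIds = new Set<Id>();
    for (Order o : Trigger.new) {
        if (o.Status != Trigger.oldMap.get(o.Id).Status) {
            changedOrderIds.add(o.Id);
        }
    }

    if (!changedOrderIds.isEmpty()) {
        // Bulk-safe: one query and one DML regardless of batch size.
        List<Shipment__c> shipments = [
            SELECT Id, Order__c, Status__c
            FROM Shipment__c
            WHERE Order__c IN :changedOrderIds];
        for (Shipment__c s : shipments) {
            s.Status__c = Trigger.newMap.get(s.Order__c).Status;
        }
        update shipments;
    }
}
```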
This scenario demonstrates:
- Synchronous request-reply communication where the WMS waits for Salesforce’s HTTP 200 response before proceeding.
- Use of identifiers (Salesforce record IDs or External ID fields) to target specific records.
- Real-time updates via REST API without requiring batch jobs.
- Secure OAuth authentication and token reuse for performance.
Refer to the Salesforce documentation for more details – Remote Call-In
5. UI Update Based on Data Changes
This integration pattern allows Salesforce to deliver real-time user interface updates as events occur, ensuring users see up-to-date information without manually refreshing their screens. Salesforce refreshes its user interface dynamically based on real-time data changes, driven by external systems or internal triggers.
For instance, a customer service representative (CSR) is assisting a customer with an outstanding payment. After the payment is processed in an external system, Salesforce’s UI updates instantly to show the new payment status, enabling the CSR to continue their work seamlessly.
Benefits
- Improved User Experience: Users receive real-time updates in their workspace, avoiding interruptions or manual refreshes.
- Increased Efficiency: Ensures users have the latest data, enabling faster and more accurate decision-making.
- Minimized Errors: Eliminates risks associated with outdated or stale data in the user interface.
Limitations
- Implementation Complexity: Requires custom solutions like Platform Events, the Streaming API, or PushTopics for real-time updates.
- Governor Limits: Dependent on Salesforce’s limits for events or data push mechanisms.
- Network Dependency: Real-time updates depend on consistent network connectivity.
When to Use the UI Update Based on Data Changes Pattern
This pattern is ideal when Salesforce users need to see real-time updates without refreshing the screen, particularly in time-sensitive workflows.
Here are key considerations to determine if this pattern is suitable for your integration:
- Does the data need to be stored in Salesforce?
- Use this pattern if the updated data must be part of Salesforce records for reporting or further processing.
- If the data is transient or purely for display, consider external UI layers.
- Can a custom user interface layer be built?
- If standard Salesforce UI features cannot meet the requirements, build a custom Lightning Web Component or Visualforce page to handle real-time updates.
- Will the user have access to invoke the custom interface?
- Ensure users can access the necessary UI components and have the permissions to view or interact with the updated data.
- What triggers the update?
- Identify the event source, whether internal (e.g., changes in Salesforce data) or external (e.g., updates from a payment processor), to define how and when the UI should refresh.
- Are the updates frequent?
- Assess the volume and frequency of updates to ensure system scalability and avoid overwhelming users with excessive changes.
| Solution (Fit) | Details |
|---|---|
| Salesforce Streaming API with PushTopics (Best) | • Real-time updates for Salesforce UI based on specific record changes. • Uses PushTopic to define trigger events and data to include in notifications. • Requires a custom UI (Lightning Component or Visualforce Page) to subscribe to the channel and display updates. |
| Platform Events (Best) | • Highly scalable for event-driven architectures. • Use when updates need to be published from external systems as well as Salesforce. • Requires custom Lightning Web Component or Visualforce Page to handle events and update the UI. |
| Custom Polling Mechanism (Suboptimal) | • Periodically queries Salesforce for updates and refreshes the UI. • Inefficient and increases API usage. • Adds delay between record changes and UI updates, affecting real-time responsiveness. |
By addressing these considerations, you can determine whether the UI Update Based on Data Changes pattern aligns with your integration needs. This ensures seamless, real-time updates in the Salesforce user interface, enhancing user productivity and experience.
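To make the Platform Events row concrete, here is a minimal publish from Apex. The `Payment_Status__e` event and its fields are hypothetical; the event definition itself is created in Setup, and a subscribed Lightning Web Component (via the `lightning/empApi` module) would refresh the UI when the event arrives:

```apex
// Publish a (hypothetical) platform event that subscribed UI
// components can react to; delivery is asynchronous.
Payment_Status__e evt = new Payment_Status__e(
    Record_Id__c = '001xx000003DGbXAAW',  // example record Id
    Status__c    = 'Paid');

Database.SaveResult sr = EventBus.publish(evt);
if (!sr.isSuccess()) {
    for (Database.Error err : sr.getErrors()) {
        System.debug('Publish failed: ' + err.getMessage());
    }
}
```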
Important Considerations
- Timeliness
- Notifications must reach the user interface quickly. Use a low-latency event delivery mechanism (e.g., Streaming API).
- Error Handling and Recovery
- Handle missed notifications due to temporary disconnections by providing users with manual refresh capabilities.
- Security
- Always use HTTPS for secure communication. Authenticate users and validate events to ensure data integrity.
- Scalability
- Evaluate the number of concurrent subscribers to ensure the solution handles the expected load without delays.
Example
A telecommunications company manages customer cases in Salesforce. Managers need real-time notifications when a case is closed with a resolution marked as “Successful”. The solution involves:
- Creating a PushTopic that fires when a case is saved with `Status = 'Closed'` and `Resolution = 'Successful'` (see the sketch after this list).
- Building a custom user interface, such as a Lightning component or Visualforce page, that uses the Salesforce Streaming API (CometD library) to subscribe to the PushTopic.
- Rendering notifications dynamically in the custom UI based on the event payload, for example an alert showing the case details.
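Since a PushTopic is itself a record, step 1 can be done once from anonymous Apex. A minimal sketch, assuming `Resolution__c` is a custom field on Case:

```apex
// One-time setup: define the streaming channel that UI clients
// (via CometD) will subscribe to.
PushTopic topic = new PushTopic();
topic.Name = 'ClosedSuccessfulCases';
// Resolution__c is an assumed custom field on Case.
topic.Query = 'SELECT Id, CaseNumber, Subject, Status, Resolution__c '
            + 'FROM Case WHERE Status = \'Closed\' '
            + 'AND Resolution__c = \'Successful\'';
topic.ApiVersion = 60.0;
topic.NotifyForOperationCreate = true;
topic.NotifyForOperationUpdate = true;
topic.NotifyForFields = 'Referenced';  // notify when referenced fields change
insert topic;
```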
This example demonstrates:
- Real-time updates to the user interface without requiring manual screen refreshes or additional user action.
- The use of Salesforce Streaming API for efficient event-driven updates, reducing the need for custom polling mechanisms.
- Enhanced productivity and situational awareness for managers by delivering timely and actionable notifications.
Refer to the Salesforce documentation for more details – UI Update Based on Data Changes
6. Data Virtualization
This integration pattern enables Salesforce to display and interact with external data in real time, without persisting the data locally, by using external objects and OData connectors. When a user views or queries an external object, Salesforce issues a live call to the external system via OData 2.0/4.0 or a custom adapter, fetches the current data, and renders it in list views, detail pages, or custom Lightning components as if it were native Salesforce data.
For example, a sales representative checks inventory levels before committing to a customer order. Rather than maintaining a copy of the inventory table in Salesforce, the rep navigates to an external object that points to the ERP system’s stock data. Salesforce retrieves the real‑time availability via the OData adapter and displays it instantly, enabling the rep to make an informed decision without ever leaving the Salesforce UI.
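Once Salesforce Connect maps the ERP stock table to an external object, it can be queried like a native object, with each query triggering a live OData call. A sketch, where `Inventory_Item__x` and its custom fields are hypothetical mappings:

```apex
// External objects carry the __x suffix; this SOQL triggers a live
// OData callout to the ERP rather than reading Salesforce storage.
List<Inventory_Item__x> items = [
    SELECT ExternalId, Product_Code__c, Quantity_On_Hand__c
    FROM Inventory_Item__x
    WHERE Product_Code__c = 'SKU-1001'];

for (Inventory_Item__x item : items) {
    System.debug(item.Product_Code__c + ' on hand: '
        + item.Quantity_On_Hand__c);
}
```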
Benefits
- Always current: Users see live data directly from the source system, eliminating reconciliation overhead.
- Storage savings: No need to replicate large or sensitive datasets into Salesforce storage.
Limitations
- Performance variability: Dependent on external system response times and network latency, which can affect user experience.
- Limited features: Reporting and some UI features (e.g., certain types of joins) may not be fully supported on external objects.
Use Cases
- Catalog browsing: Displaying product definitions from an ERP in Salesforce CPQ without storing them locally.
- Regulatory data: Viewing compliance records from a specialized system in Service Cloud without duplicating sensitive data.
| Solution (Fit) | Details |
|---|---|
| Salesforce Connect (Best) | • Use Salesforce Connect to access external data (SAP, Oracle, Microsoft) in real time without data replication. • Maps external tables to external objects that are updated live. • Supports query, create, update, and delete operations. • Native UI integration: list views, detail pages, tabs, layouts. • Adapters: OData 2.0/4.0, Cross‑Org, or custom Apex Connector Framework. |
| Request & Reply (Suboptimal) | • Use Salesforce web service APIs (SOAP or REST) for on‑demand fetch/update of external data. • SOAP API: Consume WSDL, generate proxy Apex, invoke synchronous callouts from Visualforce/Apex. • REST API: HTTP callouts (GET/POST/PUT/DELETE) via Apex HTTP classes, triggered by user actions. • Requires UI customization and not suited for bulk or automated processes. |
Refer to the Salesforce documentation for more details – Data Virtualization
Comparison of All Patterns
| Pattern | Type | Timing | Data Handling Capacity | Key Benefits | Key Limitations |
|---|---|---|---|---|---|
| Request & Reply | Process/Data | Synchronous | Low | Real-time feedback, simple error flow | Timeout/governor limits, low capacity |
| Fire & Forget | Process/Data | Asynchronous | Medium to High | Non-blocking, decoupled processing | Delayed error handling, reliability work |
| Batch Data Synchronization | Data | Scheduled | Very High | Handles large volumes, off-peak loads | Data latency, complex recovery processes |
| Remote Call-In | Process/Data | Synchronous/Async | Low to High | Centralized rules, flexible access | API quotas, network dependencies |
| UI-Based Updates | User Interaction | Real-time | Medium | Improves user experience, enables instant feedback | Resource-intensive, dependent on UI framework |
| Data Virtualization | Virtual | Real-time | Medium | Live data, storage savings | Performance variability, limited UI/reporting features |

