Innovation-driven Engineering
Engineered for the Future, Designed for Today
Pioneering AI/ML solutions, cloud engineering, and modern user experiences to transform how enterprises operate, innovate, and grow.
Our Engineering Core Principles
At Ramco, we're guided by a set of engineering principles that shape how we build and deliver products.

AI-Driven Intelligence
We harness the power of Artificial Intelligence and Machine Learning to automate processes, drive insights, and deliver predictive outcomes across the enterprise.

Composable Architecture
Our platforms follow a modular, API-first approach, enabling agile deployment, seamless integration, and scalable innovation—tailored to changing business needs.

DevSecOps Culture
Security is not a checkpoint—it’s continuous. Our DevSecOps model embeds security into every phase of the software lifecycle, enabling safe and rapid delivery.

Cloud-Native & Resilient
Built for the cloud, our systems are elastic, resilient, and self-healing—ensuring high availability, global scalability, and fast disaster recovery.
Featured Articles

Scalable and Searchable Audit Trail with Elasticsearch: Ramco’s Modernized Approach
In an era where auditability, transparency, and compliance are critical, Ramco is modernizing its systems with a scalable and intelligent audit trail architecture.

Automation
Modernizing Bulk Processing
Shanmugam S
Jun 5, 2025
3 min read

Challenge
Ramco Applications faced a challenge with high-volume processing: because the entire business logic and processing ran in the database layer, processing times were long and only one run could be handled at a time.
Goals
The following goals were set when the modernization journey was undertaken:
- Faster processing outside the database
- Move away from writing stored procedures for business logic
- Scale processing to millions of work units
- Build a generic solution rather than one tied to a specific use case
- Autoscaling of infrastructure based on processing needs
Ideation
With these goals in mind, we evaluated several technologies, including Apache Spark, before ultimately selecting our internal in-memory engine. The in-memory engine processes all data in-memory, significantly reducing I/O overhead and accelerating computation.
To make the system accessible and flexible, we designed a domain-specific language (DSL) for expressing business rules and mathematical expressions. This approach empowers functional consultants to define and modify processing logic without deep technical intervention.
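To give a feel for the idea, here is a minimal sketch (in Java) of a rule expressed as a formula over a work unit's attributes; the rule, field names, and classes below are purely illustrative and are not Ramco's actual DSL syntax.

import java.util.Map;
import java.util.function.ToDoubleFunction;

// Hypothetical illustration: a business rule expressed as a named formula
// over a work unit's attributes. In the real DSL the formula would be parsed
// from text authored by functional consultants; here a lambda stands in for it.
public class DslRuleSketch {

    // A "rule" maps a work unit (attribute name -> value) to a computed amount.
    record Rule(String name, ToDoubleFunction<Map<String, Double>> formula) {}

    public static void main(String[] args) {
        // e.g. DSL text: overtimePay = hoursOver40 * hourlyRate * 1.5
        Rule overtimePay = new Rule("overtimePay",
                u -> u.get("hoursOver40") * u.get("hourlyRate") * 1.5);

        Map<String, Double> workUnit = Map.of("hoursOver40", 6.0, "hourlyRate", 32.0);
        System.out.println(overtimePay.name() + " = "
                + overtimePay.formula().applyAsDouble(workUnit)); // 288.0
    }
}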
For scalability, we defined the unit of work that can be taken up. Multiple agents can operate in parallel, each handling a set of these work units, while leveraging multi-threading for further speedup. To ensure the solution was generic, we introduced a configurable process flow—akin to BPMN—that orchestrates and executes DSL rules for any bulk processing scenario.
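The sketch below illustrates the work-unit idea under simplified assumptions: a single agent claims a batch of independent units and processes them on a thread pool. The actual engine also distributes units across multiple agents, but the same principle applies at each level.

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.stream.IntStream;

// Illustrative only: partition work into independent units and let a pool of
// worker threads (one "agent" here) process them in parallel.
public class WorkUnitSketch {
    public static void main(String[] args) throws InterruptedException {
        List<Integer> workUnits = IntStream.rangeClosed(1, 1_000).boxed().toList();

        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (Integer unit : workUnits) {
            pool.submit(() -> process(unit));   // each unit is independent
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }

    static void process(int unit) {
        // apply the configured DSL rules to this unit and stage the results
    }
}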
Data ingestion and output were also generalized: the in-memory engine reads various entities from the database and writes results back in a standardized way, eliminating repetitive coding for each use case.
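As a rough illustration of what such a standardized contract could look like (the interface and method names are assumptions, not the actual engine API):

import java.util.List;
import java.util.Map;

// Hypothetical shape of the generalized read/write contract: any bulk process
// names the entities it needs and receives rows back in a uniform map form,
// so no per-use-case plumbing code has to be written.
public interface BulkDataGateway {
    List<Map<String, Object>> readEntity(String entityName, String filter);
    void writeResults(String entityName, List<Map<String, Object>> rows);
}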
Finally, by containerizing the in-memory engine and deploying it on a serverless container platform, we enabled true autoscaling. This ensures infrastructure is provisioned only as needed, supporting peak loads efficiently without incurring unnecessary costs during idle periods.
Transition & Implementation
Migrating from a database-centric approach to an in-memory, distributed processing model required careful planning and incremental steps. In the use cases we selected, some standard logic is handled by the product, but pay element-specific computation logic is often tailored to each customer’s requirements. This meant we had to reverse engineer existing stored procedures and decompose them into discrete BPMN activities, forming a configurable process flow.
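To make the decomposition concrete, the hypothetical sketch below configures a bulk process as an ordered list of activities, mixing product-standard steps with customer-specific DSL rule steps; the activity names and types are illustrative only and do not reflect the actual configuration model.

import java.util.List;

// Illustrative configuration of a bulk process as ordered, BPMN-like activities.
// Standard activities ship with the product; customer-specific pay-element
// logic plugs in as DSL rule activities.
public class ProcessFlowSketch {
    record Activity(String name, String type) {}

    public static void main(String[] args) {
        List<Activity> payrollRun = List.of(
                new Activity("LoadEmployeeData",   "INGEST"),
                new Activity("ComputeBasicPay",    "STANDARD"),
                new Activity("ComputeOvertimePay", "DSL_RULE"),   // customer-specific
                new Activity("PublishResults",     "OUTPUT"));

        payrollRun.forEach(a -> System.out.println(a.name() + " [" + a.type() + "]"));
    }
}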
The customized logic was reimagined as DSL-based rules, making them easily configurable and maintainable. We began by identifying the most resource-intensive processes and systematically re-architected them to leverage the in-memory processing capabilities. This transition enabled us to decouple business logic from stored procedures, moving it into a scalable and maintainable application layer that supports parallel and distributed execution.
Architecture
During our modernization journey, we evaluated various architectural patterns and determined that Space-Based Architecture (SBA) was the best fit for our requirements. In SBA, application components are distributed across multiple nodes, enabling efficient bulk processing that can serve multiple environments or tenants simultaneously.
A key advantage of this architecture is its ability to offload data processing from the database: data is extracted, processed externally, and the results are published back to the database. This approach addresses several critical requirements for modern bulk processing systems:
- Distributed Processing: Workloads are spread across multiple nodes, ensuring efficient use of resources.
- Data Partitioning: Data is divided into manageable segments, allowing parallel processing and improved performance.
- High Availability: The system is resilient to node failures, ensuring continuous operation.
- Scalability: Resources can be dynamically added or removed to handle varying workloads.
- Fault Tolerance: The architecture is designed to recover gracefully from failures.
- Loose Coupling: Components interact with minimal dependencies, making the system flexible and maintainable.
- Asynchronous Communication: Tasks are processed independently, reducing bottlenecks and improving throughput.
- Event-Driven Architecture: Processing is triggered by events, enabling real-time responsiveness.
- Dynamic Scaling: The system automatically adjusts resources based on demand.
- Data Consistency: Mechanisms are in place to ensure processed data remains accurate and reliable.
By adopting Space-Based Architecture, we built a robust, scalable, and flexible foundation for bulk processing that meets the evolving needs of our customers.
In simple terms, the modernized application now works as described in the following diagram:
For the in-memory engine containers running on a serverless container platform, we leveraged both AWS Fargate and Azure Container Instances. To provide flexibility and cloud-agnostic deployment, we implemented the starting and stopping of containers as APIs using the façade pattern. This abstraction allows us to seamlessly support multiple cloud providers and easily extend support to new platforms as business needs evolve.
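A simplified sketch of such a façade is shown below; the interface and class names are assumptions, and the real implementations would delegate to the respective cloud SDKs, which are omitted here.

// Illustrative façade: callers ask for engine containers to be started or
// stopped without knowing which cloud provider serves the request.
public interface ContainerPlatform {
    String startEngineContainers(String tenantId, int instanceCount);
    void stopEngineContainers(String runId);
}

// One implementation per provider; each would delegate to that provider's SDK.
class FargateContainerPlatform implements ContainerPlatform {
    public String startEngineContainers(String tenantId, int instanceCount) {
        // run Fargate tasks for this tenant via the AWS SDK (omitted)
        return "fargate-run-id";
    }
    public void stopEngineContainers(String runId) { /* stop the tasks */ }
}

class AciContainerPlatform implements ContainerPlatform {
    public String startEngineContainers(String tenantId, int instanceCount) {
        // create container groups via the Azure SDK (omitted)
        return "aci-run-id";
    }
    public void stopEngineContainers(String runId) { /* delete the groups */ }
}

The provider-specific implementation can then be selected through configuration, keeping the bulk processing flow unaware of the underlying platform.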
This type of dynamic infrastructure requires operational teams to actively monitor incoming workloads, track container lifecycle events, and ensure resources are efficiently utilized. To support this, we developed a comprehensive dashboard that provides real-time visibility into workload status, container start and stop events, and overall system health. This enables the team to proactively manage processing jobs and optimize resource allocation.
Adoption & Results
The adoption of the in-memory engine marked a turning point in our bulk processing journey. By shifting business logic out of the database and leveraging in-memory computation, we achieved a multifold improvement in processing speed. The new system’s generic design means it can be reused for any bulk operation, and its autoscaling capability ensures we can meet demand without over-provisioning resources.
Next Steps
Given its generic design, our bulk processing framework can be extended to multiple use cases.
A key challenge ahead is enabling existing customers—who currently rely on database-centric logic—to upgrade to the new in-memory engine-based processing. This involves analyzing and reverse-engineering legacy stored procedures, then translating that logic into DSL rules compatible with the in-memory engine. To streamline this migration, we are exploring the use of AI-driven techniques to automate the extraction and transformation of business logic. We look forward to sharing more about this migration strategy in a future article.

Shanmugam S
Director – Engineering
He specializes in enterprise software solutions and brings extensive experience in architecting and delivering complex technology implementations across industries.
Shape the Future with Ramco
We're looking for passionate individuals to join our growing team. Explore opportunities that allow you to make an impact and grow your career in a supportive environment.
View All Open Positions