Overview

Control-M Managed File Transfer (MFT) is a Control-M add-on: an FTP/SFTP client and server solution that enables you to watch and transfer files from a local host to a remote host, a remote host to a local host, or a remote host to another remote host. Control-M MFT uses industry-standard protocols, such as FTP (based on RFC 959) and SFTP, and does not require installation on remote computers.
Price is per 100 Tasks (of the MFT Add-On License). Capacity for this add-on must match the total number of Control-M Platform Task Licenses. For example, if you own 300 BMC Platform Licenses, then you must purchase a total of 300 of the add-on licenses.
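The FTP side of this (RFC 959) can be sketched with Python's standard ftplib: the watch step below treats a file as complete once its size stops changing between polls. Host, credentials, paths, and the size-stability heuristic are illustrative assumptions, not Control-M MFT internals.

```python
import time
from ftplib import FTP  # plain FTP per RFC 959; real MFT also offers SFTP

def is_stable(previous_size, current_size):
    """Treat a remote file as complete once its size stops changing between polls."""
    return previous_size == current_size and current_size > 0

def watch_and_fetch(host, user, password, remote_name, local_path, poll_secs=5):
    """Poll an FTP server until remote_name's size is stable, then download it.
    Host, credentials, and paths are placeholders; production MFT layers retries,
    checksums, encryption, and scheduling on top of this basic loop."""
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        previous = -1
        while True:
            current = ftp.size(remote_name) or 0
            if is_stable(previous, current):
                break
            previous = current
            time.sleep(poll_secs)
        with open(local_path, "wb") as out:
            ftp.retrbinary(f"RETR {remote_name}", out.write)
```

The same watch-then-transfer pattern applies in either direction (local to remote, remote to local, or remote to remote).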
Highlights
- Schedule and manage your file transfers securely and efficiently with Federal Information Processing Standards (FIPS) compliance and policy-driven processing rules
Details
Pricing
| Dimension | Cost/hour |
|---|---|
| t3.2xlarge (Recommended) | $0.626 |
Vendor refund policy
We do not currently support refunds; all fees are non-refundable.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
64-bit (x86) Amazon Machine Image (AMI)
Amazon Machine Image (AMI)
An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.
Additional details
Usage instructions
Control-M Installation Products

- Control-M v9.0.21 Platform Installation
  Description: Control-M is a workload automation solution that enables you to automate the scheduling and processing of your business workflows across various platforms and applications from a single point of control.
  Link: https://documents.bmc.com/supportu/9.0.21.000/en-US/Documentation/Introduction_to_Control-M_Installation.htm
  Installation File Type: ISO
  Install File Path: C:\Install\Control-M\Control-M.iso
  Link to the installation guide: https://documents.bmc.com/supportu/9.0.21.000/en-US/Documentation/Control-M_Server_installation.htm

- Control-M Workload Archiving
  Description: Control-M Workload Archiving is a Control-M add-on that enables you to automatically archive job log and output data, from both mainframe and distributed systems, in a secure, central repository that is separate from the production environment. When Control-M/Server submits a job to run on an Agent, the Workload Archiving Server archives the job log and output in a separate PostgreSQL or Oracle database for a defined period, based on Workload Archiving policies.
  Link: https://documents.bmc.com/supportu/9.0.21.000/en-US/Documentation/Control-M_Workload_Archiving_installation.htm
  Installation File Type: ZIP
  Install File Path: C:\Install\Control-M\Control-M Archive.zip
  Link to the installation guide: https://documents.bmc.com/supportu/9.0.21.000/en-US/Documentation/Control-M_Workload_Archiving_installation.htm

- Control-M Workload Change Manager
  Description: Control-M Workload Change Manager is a Control-M add-on that enables the following: in the Control-M Workload Change Manager web application, application developers, analysts, and other web users can request changes to business job flows by creating them and submitting them as requests to a Control-M scheduler, or by checking them in to the Control-M database; these change requests relate to your definitions in Control-M. In Control-M, an administrator can create standards that help schedulers and web users define folders and jobs according to your organization's standards.
  Link: https://documents.bmc.com/supportu/9.0.21.000/en-US/Documentation/Control-M_Workload_Change_Manager_installation.htm
  Installation File Type: ZIP
  Install File Path: C:\Install\Control-M\Control-M WCM.zip
  Link to the installation guide: https://documents.bmc.com/supportu/9.0.21.000/en-US/Documentation/Control-M_Workload_Change_Manager_installation.htm#Installi

- Control-M Managed File Transfer
  Description: Control-M Managed File Transfer (MFT) is an FTP/SFTP client and server solution that enables you to watch and transfer files from a local host to a remote host, a remote host to a local host, or a remote host to another remote host. Control-M MFT uses industry-standard protocols, such as FTP (based on RFC 959) and SFTP, and does not require installation on remote computers.
  Link: https://documents.bmc.com/supportu/9.0.21.000/en-US/Documentation/Control-M_Managed_File_Transfer_installation.htm
  Installation File Type: ZIP
  Install File Path: C:\Install\Control-M\Control-M MFT.zip
  Link to the installation guide: https://documents.bmc.com/supportu/9.0.21.000/en-US/Documentation/Control-M_Managed_File_Transfer_installation.htm
Resources
Vendor resources
Support
Vendor support
support-matrix@matrix.co.il: a fast-response support channel staffed 9x5 by experienced technical support engineers, based on the SLA offered (production failure: up to 1 hour to respond).
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
Customer reviews
Centralized automation has transformed complex workflows and now ensures timely, reliable jobs
What is our primary use case?
I have been using Control-M for more than six years. Initially, it was mostly just monitoring the jobs, but now I also do some troubleshooting around that.
My main use case for Control-M these days involves multiple jobs running in our contact center systems. We have multiple nodes to begin with, and some of them are responsible for maintaining the predictive dialer calling list for records sourced from multiple platforms. Along with this, we also have certain jobs deployed for our reporting purposes, where our databases are synchronizing with other Genesys databases. Additionally, we have multiple log archiving systems or jobs that have been deployed as well. We have some ServiceNow jobs that trace and manage the employee profiles, and then we have some speech-related Nuance jobs scheduled as well.
One of the major use cases of Control-M for us is our log archival process. This process integrates file movements with job scheduling and enables secure file transfer by using both FTP and SFTP file transfers. It triggers the job when the file arrives, and then it also validates the file completion and size before actual processing. So, in the contact center cluster, one of the jobs that we have is the Informat job that extracts the caller data from Informat and transfers it to various downstreams such as BIH or Connect Direct. Apart from this, we also have various SQL stored procedure purging jobs in Genesys, and there is one main, important Cassandra job that runs on the Cassandra nodes selected for incremental backups. The Pulse housekeeping, where the job runs and cleans the ECP snapshots every 30 minutes, is one of the major, significant jobs that we use. Along with this, we also have a cyclic job that runs every 15 minutes on each of the MCP nodes. Every 15 minutes, it resyncs, basically for the audio file resyncing that happens from one of the applications to a given directory. This means the most recent file that has been uploaded is put into all the MCP boxes every five seconds, and then the right announcement gets picked for the user to hear.
The log archival job basically copies and archives all the Genesys log files for a given retention period. It copies the files from site one to a specific site location and from site two to another specific site location. This is not only in production; it covers all environments we have, including Dev, SIT, and QA. We have also automated that all archived log files older than three days are gzipped and moved to a different archive location than the one they were initially sent to. It also verifies that masking is applied and that the schedules are followed for the files that are not getting archived.
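The gzip-and-move step described above can be approximated in a few lines. The three-day threshold comes from the description; the function name, directory layout, and the `*.log` pattern are illustrative assumptions, and the real policy would live in the Control-M job definitions.

```python
import gzip
import shutil
import time
from pathlib import Path

def archive_old_logs(log_dir, archive_dir, max_age_days=3, now=None):
    """Gzip every *.log in log_dir older than max_age_days and move the .gz
    into archive_dir. The three-day threshold matches the review; directory
    names and the *.log pattern are illustrative."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400
    Path(archive_dir).mkdir(parents=True, exist_ok=True)
    moved = []
    for log in Path(log_dir).glob("*.log"):
        if log.stat().st_mtime < cutoff:
            gz_path = Path(archive_dir) / (log.name + ".gz")
            with open(log, "rb") as src, gzip.open(gz_path, "wb") as dst:
                shutil.copyfileobj(src, dst)  # stream copy, so large logs fit in memory
            log.unlink()  # remove the original only after the gzipped copy exists
            moved.append(gz_path.name)
    return sorted(moved)
```

A scheduler would run this per site and per environment, with the retention period supplied as a job parameter.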
What is most valuable?
The best features Control-M offers that make all this possible for me include the job scheduling, which is critically important. It enables us to schedule jobs across multiple platforms, such as Unix and Windows, together, and running jobs at very specific times helps eliminate a lot of manual task execution by triggering based on either a file arrival or a system event. It also enables us to run the jobs in the right order. Along with this, we also have the data pipeline and ETL automation, which helps various data engineering and analytics teams automate the Hadoop jobs and trigger downstream analytics after data ingestion. All the ETL processes are managed better in terms of both data validation and quality checks. Additionally, business-critical processes meet deadlines; for example, the ServiceNow data that we have to receive before 8:00 AM, or the month-end or quarter-end batch runs, are completed in a timely and accurate fashion.
The job scheduling and sequential jobs have been the most important features of all. The rsync specifically, where the cyclic jobs run every 15 minutes, makes sure that the process is streamlined without any manual intervention, which helps a lot.
Along with this, end-to-end workflow orchestration, which is basically event-driven or file-driven, differentiates Control-M from any other basic schedulers. It is not just about running a job on a schedule, but it also enables complete business workflow from an application to multiple platforms and multiple environments. Dependency-based execution ensures that the previous job or the upstream job has completed before starting with the event, and multiple other conditions can also be set. The cross-technology enablement allows one workflow to span across multiple systems, from cloud services to databases to Unix and Windows, providing a single point of control for everything.
What needs improvement?
Control-M is a very practical product, but it is challenging to understand how certain features work. The UI and user experience sometimes feel complex and could be simplified a little to provide cleaner dashboards. The major pain points are licensing complexity and access-related challenges.
Simplifying the UI can provide us better use of the application itself. Probably some more documentation around how to use the schedules or the alerting systems would also be helpful.
What do I think about the stability of the solution?
Control-M systems have been stable even during upgrades and patches, with very minimal disruptions to the system, so it has been stable throughout.
What do I think about the scalability of the solution?
We started using Control-M with very few teams starting with about 50 users, but now we have about 3,000 plus users using Control-M in my organization.
How are customer service and support?
Control-M customer support has been good, but we have not had the opportunity to extensively talk to them because we have an in-house support team that we reach out to before contacting the actual BMC vendor.
Which solution did I use previously and why did I switch?
The other alternatives that we previously used were mostly cron jobs and other system jobs. We briefly used IBM Workload Automation but did not proceed with it. We also used Jenkins with some plugins, but ultimately, we did not pursue alternatives such as AutoSys. I believe Control-M is hard to replace.
The organization explored AutoSys and IBM Workload Automation before ultimately choosing to go ahead with Control-M.
What about the implementation team?
We have a team of 25 to 30 members who are responsible for the deployment and maintenance of the Control-M setup. Our team includes architects and designers as well as deployment and support personnel.
What was our ROI?
I see a return on investment with Control-M. The other challenge we currently face is that they have started charging us for each job run, which is more of an enterprise-level decision.
What's my experience with pricing, setup cost, and licensing?
I do not have a major role in terms of pricing, setup cost, and licensing. Our team was not allowed to access Control-M for a certain duration due to licensing constraints, which I feel is a challenge, but I was not directly involved in any of these pricing, setup, or licensing related discussions.
What other advice do I have?
The impacts that Control-M has caused for my organization have very visibly increased operational reliability. Before Control-M, most jobs were script-based, such as cron jobs, and there was a lot of dependency on manual monitoring. Until the jobs were reported as failed by the business teams, we would not have had visibility over them. Now with Control-M, we have an end-to-end workflow which is centrally managed. If a node has failed, it sends notifications, and there is a lot of error handling built in. There are multiple automatic retries, reducing human intervention. In terms of issue detection and resolution itself, we have dashboards configured that enable us to get alerted even before the businesses are impacted or the businesses report the impact, allowing us to solve issues proactively. This has also increased productivity improvement.
When one of our reporting downstreams processes data and uploads it to our systems, it used to take an hour for the data to actually reflect. Businesses would notice missing data in the systems when they consumed the data. Now, within the duration when the job runs, it counts the number of rows we have, which means if the job fails, it is notified immediately within that 15-minute duration, helping us rerun the job. This means issues that were reported in an hour's time now get reported within the duration of the job running, which is within 15 minutes, leading to a significant improvement in how we see that the reports are being run.
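The row-count check described above boils down to failing the job fast so the scheduler can alert and rerun within the same cycle. A minimal sketch, where the minimum-rows threshold is an assumed per-feed parameter rather than a Control-M setting:

```python
def validate_row_count(actual_rows, expected_min):
    """Raise immediately when a load produces fewer rows than expected, so the
    scheduler can alert and rerun within the same cycle instead of business
    users noticing missing data later. expected_min is an assumed per-feed
    threshold, not a Control-M setting."""
    if actual_rows < expected_min:
        raise RuntimeError(
            f"Row-count check failed: got {actual_rows}, expected at least {expected_min}"
        )
    return actual_rows
```

A non-zero exit (the uncaught exception) is what lets the scheduler mark the job failed and fire its alert within the 15-minute window.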
There is a huge user base in our organization, with about 3,000 users using Control-M. The levels of usage vary; some have read access and just view the jobs, while others perform deployments in terms of job scheduling and other tasks.
We extensively use Control-M to schedule multiple banking-related jobs in varied fields, not just the contact center. We definitely intend to increase the usage.
The biggest lesson I have learned from using Control-M is that it is a best-in-class workload automation platform, effective in building, scheduling, managing, and monitoring complex workflows, especially for critical applications such as DataOps and enterprise DevOps environments where reliability and SLAs play a major role. The cross-system orchestration matters significantly more than speed alone, as it ensures jobs run accurately and efficiently.
My advice for others looking into using Control-M is that no matter how many systems you have, Control-M is the most competent and enterprise-scalable tool available. With various requirements, it is extremely reliable in monitoring and scheduling, making it an excellent choice. I would rate Control-M an 8 out of 10 overall.
Secure file transfers have increased traceability and now simplify our end-to-end job management
What is our primary use case?
I predominantly use Control-M for file transfers through the SaaS version. BMC has recently added an enterprise feature for the SaaS version, and we are now using it mostly for the file transfer part and also the APIs, which has been our latest addition.
With my current project, the File Transfer Enterprise is the best use case for us in terms of secured transfers and how we can track the transfers and manage significantly more with the transfers we are doing. This is the best feature, considering the ROI as to what my current scenario was and what we have achieved with the enterprise feature.
What is most valuable?
File Transfer Enterprise is the most valuable feature for our current project in terms of secured transfers and how we can track the transfers and manage significantly more with the transfers we are doing. This feature provides the best value, considering the ROI that we have achieved with the enterprise feature.
What needs improvement?
The integrator feature is not being supported by the BMC support team, which leaves it to us to customize and integrate it. Of course, the use cases differ and that is when they have decided not to have it under the support cluster. However, having basic support on the integration integrator would help us considerably. We do a lot of research and development to achieve what we need to accomplish, but BMC has the experts and eventually they have the answers. If they include the integrator feature under the support structure, it would be greatly beneficial.
The integrator feature is handy, but it can be tricky when we are trying to integrate in terms of achieving the connection profile. Setting up the connection profile initially to get any integrator working for us is somewhat tricky with different use cases we want to achieve. It is not straightforward.
For how long have I used the solution?
I have been using Control-M since 2008 when I got into IT and started my career.
What do I think about the stability of the solution?
The SaaS version is excellent in terms of stability. For the price, it is very stable. We have not had any downtime. It has been more than two and a half years, approaching three years now since we got SaaS onto our system with no downtime at all. It is quite stable.
What do I think about the scalability of the solution?
The scalability aspect is simply a matter of getting more agents in place and since it is a SaaS version, it gets scaled up on their end. We do not have to worry much about it. Of course, the licensing comes into play if the number of jobs are increasing, but it is dynamic.
How are customer service and support?
I have recently contacted the BMC Control-M technical support team.
They are top-notch in terms of speed and quality. Most of the time, any question starts with extracting the logs and providing them to the support team, and they go through that. If they are not able to resolve the issue, they take time and put it to the research and development team. Of course, it takes a while if it goes to research and development, but they make sure that the issue is resolved. That is something great about them.
I would give the support a score of nine. I would still like to rate them ten, but some cases do take a while to get the resolution.
Which solution did I use previously and why did I switch?
AutoSys has been used and is the closest alternative to BMC Control-M. There are other features and other products, but AutoSys is the most used alternative. However, it is nowhere near what BMC Control-M has to offer.
How was the initial setup?
The initial deployment of Control-M was straightforward. With the current SaaS version, there is a support window of 14 days or specific hours. It is straightforward and depends on who is going to drive the deployment. For my case, I was experienced with the on-premise version as well, so that seemed straightforward for me. However, for those coming in with lesser experience, it may take some time. The documentation is excellent, so it is straightforward.
What about the implementation team?
The deployment of Control-M requires administrators. One administrator is the minimum and is sufficient to do the deployment; since we have two admins, we share the workload.
What was our ROI?
Pricing for Control-M is on the costlier side when it comes to SaaS pricing. However, it takes away all the hassle of maintaining the Control-M server itself, leaving us with only the agent side to manage. There are pros and cons to that pricing model, but it is on the higher side. Mostly in terms of ROI, the companies and stakeholders have that complaint.
What other advice do I have?
Control-M does require maintenance on our end. There are two different windows of maintenance. One is when the core technology, in our case SAP, is getting under maintenance window, so we have to pause our jobs and resume it later on. This is a critical window that prevents our jobs from being pushed into SAP. We have to pause it and resume it depending on the schedules and make sure that we resume it and do not miss any jobs. The other window is when our agent maintenance or agent infrastructure maintenance occurs, when switching from a primary to a secondary agent, routing it, and making sure nothing is lost in the transit. Those are the two maintenance activities we perform.
We have a team of seven today, with two of us as admins. We have three schedulers and two monitoring agents.
Our engagement is with BMC. I have been involved with getting the contract rolled in for my current client and getting into the core of the technicalities in achieving the job requirements. It has been both.
We achieved the project in a month's time with Control-M. We had a project of converting and migrating our jobs from SAP workload onto the Control-M scheduler. End to end, we took less than a month to get the agents installed on the SAP infrastructure and get these jobs migrated from the SAP workload. Overall, I give this product a review rating of ten.
Automation has reduced overnight manual work and still needs better access control and logging
What is our primary use case?
Control-M is one of the scheduling tools where we can schedule the jobs overnight in contact center operations. I do not have any hands-on experience with Genesys Cloud CX. I have experience with Genesys on-premise. So far, I have not had an opportunity to work on the cloud. As of now, I'm working in one of the ANZ banks, which is located in Australia, where we are migrating from on-premise to cloud, and it is in progress. In the last quarter, they initiated the migration's first phase. It will take around two years to complete the migration. Until then, we are part of on-premise Engage support.
I use Control-M to automate jobs overnight. For example, if there is database purging, we will automate the job. We will set up the job in Control-M, and the job will initiate the process. Overnight, it will complete the process as per the schedule. To reduce manual work, we automate the scheduling jobs in Control-M. Not only in the database, there are a couple of Genesys jobs as well. CMI data processing and a few dialer jobs are placed on Control-M. This is for dialing processing to update the contact history data and calling list data. We will upload this data to the BH, and then to Control-M. Control-M will process the job as per the schedule, say eleven o'clock or eleven thirty. There are three schedules in the present environment. It will run automatically. If there is any patching on the Control-M servers or database servers on the end-user application side, we have to stop the applications and stop the job. Once the activity is completed, we will resume the jobs and reorder the jobs.
In my organization, Control-M is deployed on the cloud; they recently migrated from on-premise to cloud. The console is the same, but to access the application we use the CPC, the cloud environment variables. We try to launch the bank's environment using Azure.
What is most valuable?
Control-M is one of the best solutions to automate jobs, and it should optimize manual work. It processes with the automation scripts, which helps the contact center operations run smoothly. That is the main purpose we use it for: following the schedule for callbacks and automating the scripts where it is necessary for the operations.
From my operations end, Control-M is worthy. I am not sure about the exact pricing, but it is very valuable. Control-M is the only asset where we can run all contact center operations for automating the scheduled jobs. There is a separate team that handles all the admin and maintenance processes. We are authorized only for our applications, our Genesys applications. We have restricted access where we can access our Genesys and nice jobs. For these two clusters, we have visibility to access the jobs, and we can manage them from our application end.
The positive impact I have seen from using Control-M so far is quality. It has smooth operations and scalability. Even though it is more optimized, optimization is a necessary requirement. The benefits would be to reduce manual efforts. It is flexible to operate technically and to understand the platform and solutions.
What needs improvement?
Control-M could be improved or enhanced in some areas. There are several optimization solutions out there, but Control-M is feasible. We can easily adapt and learn it. It is easy to operate in scheduling, operating, and monitoring, so it is easy to access. Even though the user interface is simple, it is very familiar to techies who can handle it. If we had complete control of the console, then we could deep-dive on the restricted access and have the specifications of restrictions. That way, we would gain more knowledge on it. As of now, it is user-friendly.
To improve the scalability, the user access controls are important. For the failover jobs, where we can see the output and the log section, there are a lot of redundancy events. We would appreciate it if they could improve event generation and optimize the log events to read the applications. If they update that, it would be great. Our market is different from the outside. Based on the initial integrations and commitments, they have been configured in such a way. Different organizations might be following different norms.
For how long have I used the solution?
I have used Control-M for the last four years.
What do I think about the scalability of the solution?
I would rate the scalability of Control-M a seven out of ten. It is feasible.
How are customer service and support?
We are not authorized to engage with customer support from Control-M. There are teams that do. Usually, if anything comes to us, we review it on our side. If there are any errors related to our application and if it is not, then we straight away engage with the Control-M team. They will reach out to the support team if required. Otherwise, if they have the knowledge to resolve it, they will resolve it.
Which solution did I use previously and why did I switch?
Initially I worked on an HSBC project where they were using the same Control-M. In previous organizations, they were not using any scheduling tools. It was an insurance company. As of now, I have only used BMC Control-M.
How was the initial setup?
I did not have the privileges for the complete initial installation and configuration of Control-M. I initiated our application jobs, where we initiate the scheduling of new jobs, making orders, progressive, and taking over from the monitoring phase. We can do that from a specific application cluster. For a complete installation, I think it would be easier. If we get control over things, it is easier for learning quickly.
What other advice do I have?
With Control-M, I have learned that there are depending teams. Control-M is one of the best solutions. Each team has several jobs. Each application might have built the jobs. If there is any cycle, like patching windows, in the specific cluster, if our Genesys jobs are running on the cycle, those jobs might failover. There are dependency applications we have to engage with, and ask them to follow up. Dependent applications might be impacted. It is a challenge to communicate with the vendors and collaborate with all the relevant application stakeholders and inform them to put their jobs on hold. If there are any jobs related to them, we have to engage with them and follow up to hold the process. Once the activity is completed, we have to roll back the application and resume the jobs. There are some challenges while performing the monthly cycle patching. They recently migrated to cloud, but we may not involve them in the cloud. That would be the best solution on the cloud. All those optimizations mean we may not need to follow up with communications. We will just inform them via email or inform all the restricted users and permissions. It is easy. Once the cloud is in place, these are all challenges.
Overall, there are challenges in communicating with internal and external teams and coordinating. That requires manual resources to follow up on every cycle patching. Before Control-M, I would use something that is easily accessible and can integrate with BMC Control-M. Rather than other solutions, I would prefer Control-M.
When I was deploying the applications, there would be a lot of permission relevancy. Based on those permissions, we have to engage with the admins, the Control-M team who have the privileged access. We ask them to join us on the deployments, and we try to gain the privileges. We take over from our application configurations. Once it is completed, they revoke the access. For change deployments, we gain privilege access, admin privilege access, from those who are authorized in the bank. We have specific teams. We cannot control their applications. Each team has their own privileges. Whatever we require, we usually ask them to provide the privilege access, and we will take it from there.

Overall, we are not authorized in Control-M, for four years in the market. We are just on the application side; we are end-to-end Genesys operations. There are a couple of jobs in Genesys, and those we deploy in Control-M. From our Genesys applications end, we have a pretty good experience using Control-M, where we can schedule the jobs, run the jobs, and troubleshoot the jobs if required. I have this much knowledge about scheduling, monitoring, and troubleshooting the jobs.

For the specific applications, each application is required for the specific operations, permissions, and privileges. If I get the opportunity, we will go through it end-to-end. I have completed the certification, the initial Control-M certification, where we can gain access. I would rate my overall experience with Control-M a seven out of ten.
Unified orchestration has simplified complex data pipelines and improved cross-platform dependencies
What is our primary use case?
In my previous project, we were using Control-M, and we automated the data pipelines using SQL Server Agent jobs and created the Databricks workflow. We had some data available in SQL Server and some in Databricks, and because we had two systems, the orchestration process was completely different, and we were not able to manage or create a dependency because both tools were different. That is why we implemented Control-M in the past project and automated all the SQL Server jobs and the Databricks workflow using Control-M. By using a single platform, Control-M allowed us to create a dependency between the SQL Server and Databricks data. On the reporting side, we were using the Tableau dashboard as well, and for Tableau, we were using the extract to display the data. We were refreshing the Tableau extract using Control-M. In my last project, overall all the data pipelines including the Tableau extract refresh were done using Control-M.
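At its core, the cross-tool dependency described here (SQL Server load, then Databricks workflow, then Tableau extract refresh) is ordered execution with failure propagation. A scheduler-agnostic sketch, where the step callables are stand-ins for the real SQL Server, Databricks, and Tableau calls:

```python
def run_chain(steps):
    """Run named steps in order; stop at the first failure and report where the
    chain broke. Step names mirror the review's pipeline; the callables are
    stand-ins for real SQL Server, Databricks, and Tableau API calls."""
    completed = []
    for name, fn in steps:
        try:
            fn()
        except Exception as exc:
            # Downstream steps never run, mirroring dependency-based execution.
            return {"status": "failed", "at": name, "completed": completed, "error": str(exc)}
        completed.append(name)
    return {"status": "ok", "completed": completed}
```

A scheduler such as Control-M adds what this sketch lacks: retries, SLA tracking, alerting, and a single UI over the whole chain.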
We expanded a lot because previously we were using multiple tools for the same orchestration purpose, such as Databricks workflows and SQL Server Agent. Now we use a single tool for multiple tasks, which is very helpful for developers as well as business stakeholders.
What is most valuable?
I appreciate Control-M because of the dependency management it offers. As I mentioned, we had some data in SQL Server and some in Databricks, and it was hard to create a dependency while working across different tools. That is why we chose Control-M, so that we could create that dependency. We also had some highly critical banking data in that project; the SLA window was very tight, and we had to get the dashboard refreshed every morning at 7:00 a.m. The SLA features in Control-M were a key reason we chose it for that project.
I find that Control-M provides a single UI platform where I can monitor all the jobs. Previously we had different jobs, so we had to monitor each job individually. With Control-M's single platform UI, we can monitor all jobs. The main benefit is that Control-M has a retry functionality, so if any job fails during execution or due to bad data quality, we can retry the job. Once we receive the data, the job can execute automatically. The alert mechanism also triggers emails to business stakeholders whenever any job fails. These are the main features I prefer about Control-M.
Previously, we set up alerts so that whenever there was a delay in the file, it automatically sent alerts to business stakeholders indicating the file's unavailability. Whenever there was a delay, it triggered an email to notify that we were expecting the file at a certain time. Additionally, we set up a file-based trigger. Since the time of file arrival is not consistent, we configured the job to execute automatically when the file arrives, ingesting the data into our final database. This file-based trigger was a key feature we explored.
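The two behaviors described above — a file-based trigger for an arrival time that is not consistent, plus automatic retries on failure — can be sketched generically. This is plain Python, not Control-M's actual watcher; the file name, timeout, and ingest step are illustrative assumptions:

```python
import tempfile
import time
from pathlib import Path

def wait_for_file(path: Path, timeout_s: float, poll_s: float = 0.05) -> bool:
    """Poll until the file arrives or the timeout elapses (a toy 'file watcher')."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if path.exists():
            return True
        time.sleep(poll_s)
    return False

def run_with_retries(step, attempts: int = 3):
    """Re-run a failing step, the way a scheduler's retry setting would."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == attempts:
                raise  # out of retries: this is where a failure alert would fire

# Demo: drop a feed file, "watch" for it, then ingest with retries.
feed = Path(tempfile.mkdtemp()) / "daily_feed.csv"
feed.write_text("id,amount\n1,100\n")

ingested = 0
if wait_for_file(feed, timeout_s=2):
    # Ingest = count data rows (everything after the header) in this sketch.
    ingested = run_with_retries(lambda: len(feed.read_text().strip().splitlines()) - 1)
print(f"ingested {ingested} rows")
```

In a real deployment the timeout branch is where the "file unavailable" email to stakeholders would be sent; here it simply returns `False`.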
What needs improvement?
I think the pricing is a factor, and it is high. I am currently working in a multinational company that has purchased the premium enterprise-level license for all developers, so it is not a big deal for our project. However, someone in a small company or startup might face pricing constraints while implementing Control-M, as the pricing seems a bit high compared to other tools such as Airflow.
One area for improvement in Control-M could be pricing, and another is the learning curve. I feel that when someone starts working with Control-M, they need at least one month to onboard and understand all the features. Although documentation is available, understanding all features takes time. Another recommendation would be for UI improvements, as I felt the UI seemed outdated.
I feel that it is a little difficult to integrate Control-M with technologies for DataOps and DevOps processes, especially initially; I needed about one and a half months to understand the complete feature set and flexibility of this tool. From a developer standpoint it is not very user-friendly at first, but once I became skilled with it, it provided great flexibility.
For how long have I used the solution?
I have been working with Control-M for more than six years.
What do I think about the scalability of the solution?
I faced a performance issue once because we had created a very large data pipeline with multiple dependencies in Control-M. We narrowed that one workflow down into multiple sub-workflows, which improved performance; processing a large amount of data can be complex and time-consuming.
How are customer service and support?
I can raise a support ticket for BMC software whenever I have any technical issues, and they respond within a three-day SLA, providing full support.
I would rate the tech support of Control-M an 8.5 out of ten.
Which solution did I use previously and why did I switch?
I evaluated Airflow before choosing Control-M. In Airflow, we faced a similar situation because we had to create a different cron job for each Python script. We had more than 100 Python scripts fetching data from multiple source systems, and in Airflow, creating dependencies between those cron jobs was very hard. That is why we switched from Airflow to Control-M.
How was the initial setup?
The initial setup process was done by our infrastructure team. I worked as a developer to create jobs, but the actual setup was quite good and well-supported by BMC software.
Our initial setup was completed with full support from the infrastructure team. After that, the workflow creation and job creation in Control-M were entirely managed by our developers.
What about the implementation team?
As noted above, the setup itself was handled by our infrastructure team, well supported by BMC Software; our developers then took over the workflow and job creation in Control-M entirely.
What was our ROI?
I think the benefit is very high. If a company does not have any budget constraints, they should definitely explore Control-M because it allows for end-to-end orchestration of the project without needing separate projects for the data pipeline and downstream applications such as reporting. All tasks can be accomplished using one product, providing significant value if budget constraints are not an issue.
I find it cost-effective, but I am not fully certain about the overall ROI.
What other advice do I have?
The biggest lesson I learned from using Control-M is that it provides a single UI to monitor all jobs, making it much easier compared to my current project where I use Airflow, which involves managing multiple cron jobs across different tabs.
We do not have any direct contact with BMC software, so I would not describe the relationship as transformative.
I rate Control-M a nine out of ten because it simplifies complex data structures and pipelines.
Automated scheduling has streamlined our data pipelines and improved cross-platform workflows
What is our primary use case?
I am currently working as a Data Engineer at Cognizant. I have been using Control-M for the past eight months since I joined Cognizant as a Data Engineer. As a Data Engineer, my job is to monitor jobs and maintain pipelines, and Control-M is a scheduler tool which we use to schedule jobs by linking the jobs as predecessors and successors so that the flow of the data pipelines continues without human interference.
The most important daily task we monitor is the SaleRPT report, which gives business users the sales that happened the previous day in a restaurant for our project at Cognizant. The jobs are connected in such a way that replication jobs run first, then feed SQL Server to transform the data and load it into Oracle SQL. From there, the data is loaded into our data warehouse tables, and the final targets are Essbase tables. This total flow has around 17 to 18 jobs, scheduled to run twice a day once we get EOD clearance for each site. These are the latest tasks for which I used Control-M to schedule jobs in a sequential manner.
In our legacy system, there are some Informatica jobs and some SnapLogic jobs. For example, there are three sets of jobs from Informatica whose successor jobs are from SnapLogic. Control-M allows us to link the Informatica jobs to SnapLogic: when an Informatica job completes, it automatically triggers the SnapLogic pipeline, so Control-M supports the use of multiple tools together. For DataOps and DevOps it is quite important to use Control-M, as it is a scheduler that schedules multiple jobs based on our requirements. We can easily change the schedule for a particular day if we have less data, and if there is any data miss, we can easily reprocess using Control-M by putting a few jobs on hold and running jobs manually. I think Control-M is extensively important for a Data Engineer at any level.
There are multiple teams using Control-M. I think nearly 80 to 90 employees use the Control-M tool in my organization on my current project at Cognizant. Around 60 to 70 percent of them are Data Engineers; some are from the BI ETL (Business Intelligence ETL) team, some from the DevOps team, some from the development team, and some from the Aloha Insight team. Those are the teams I know of that are currently using Control-M.
What is most valuable?
I have been using Control-M to monitor and maintain pipelines. It helps us schedule jobs by linking them as predecessors and successors, ensuring the continuous flow of data without human interference. Control-M is the most used tool in my current project and is essential for job scheduling and checking job failures. Its easy interface makes it beginner-friendly.
Control-M's ability to link jobs from different tools such as SnapLogic, Informatica, and GCP DAGs enhances its functionality. The scheduler, ad hoc runs, and job linking features are particularly useful. It allows job connections to various tools and notifies us via email of any job failure, providing logs for quick rectification.
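The fail-then-notify-with-logs behavior described above can be sketched as a generic pattern. This is a plain-Python illustration, not Control-M's implementation; `run_and_notify` and the demo command are hypothetical, and a real setup would hand the payload to an email sender rather than print it:

```python
import subprocess
import sys

def run_and_notify(name, argv):
    """Run a job step; on failure, build the alert payload (status plus the
    captured log) that a scheduler would email to stakeholders."""
    result = subprocess.run(argv, capture_output=True, text=True)
    if result.returncode != 0:
        return {
            "job": name,
            "status": "FAILED",
            # Attaching the captured log is what makes quick rectification
            # possible without hunting through the target system.
            "log": (result.stdout + result.stderr).strip(),
        }
    return {"job": name, "status": "OK", "log": ""}

# Demo with a deliberately failing step (hypothetical job, not a real pipeline).
alert = run_and_notify(
    "demo_job",
    [sys.executable, "-c", "import sys; print('bad input row'); sys.exit(1)"],
)
print(alert["job"], alert["status"])
```

The same wrapper covers the success path, returning an `OK` payload with an empty log, so the caller only emails when `status` is `FAILED`.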
It can save us significant time, reducing errors and the time taken to rectify them. Automatic failure notifications enable rapid response, facilitating efficient job management. Control-M enables development on various platforms, which is essential for DataOps and DevOps operations.
Its user-friendly nature allows quick learning and management of tasks, with significant time savings compared to manual processes. We now receive automated failure notifications, which streamline error rectification and job reruns. Control-M's integration with Informatica and SnapLogic further exemplifies its efficiency.
What needs improvement?
One thing I find challenging: if an Informatica or SnapLogic job is executing and we put it on hold in Control-M, the corresponding pipeline in Informatica or SnapLogic keeps executing, and we have to go into that tool and kill the job there. It would be easier if killing the job in Control-M automatically killed it in Informatica or SnapLogic as well.
In some cases, jobs go into a waiting state, and we need to change the Control-M settings for that particular job manually to return it to the normal flow. If these two things were changed, Control-M would be an even better tool.
For how long have I used the solution?
I have been using Control-M for the past eight months since I joined Cognizant as a Data Engineer.
What do I think about the stability of the solution?
We have never experienced any licensing or security issues with Control-M. My manager and others above me in the hierarchy manage the pricing. In the almost one year I have been using Control-M, I have never experienced any security or software issues with it.
What do I think about the scalability of the solution?
Control-M is easily scalable. I would rate it a nine out of ten when it comes to scalability of Control-M.
How are customer service and support?
I have not used customer support until now, as the monitoring and the management of Control-M is done by another team. However, the other team which currently manages Control-M has helped us a lot.
Which solution did I use previously and why did I switch?
When I was deployed into this project, Control-M was already in use, so I have not chosen or compared Control-M with other tools. Since I have been using it, I have not experienced any flaws or any issues.
What about the implementation team?
Around four to five people are enough for monitoring, maintenance, and changes. For development we need quite a lot more, but once everything is developed, only three to four people can easily manage Control-M.
What other advice do I have?
I would recommend Control-M to most people. When it comes to metrics, I am not sure how much the tool has saved us, but I am quite sure it has saved us a lot of time.
For scheduling, Control-M is the first tool I have used. Along with Control-M, I also use DAG monitoring, which is built into GCP and is quite similar to a scheduler.
We can easily depend on it to schedule jobs and monitor them. I already use it quite a lot for the daily tasks on my project, and I am satisfied with how I am using it and the features it gives me.
One thing is how easy it is to use. Anyone who opens Control-M and looks at the jobs can easily figure out how to run a job, kill a job, put it on hold, check the logs, see when it started and ended, and tell whether it is running fine or there are any anomalies. So I would recommend it; it is a good tool. I would rate this product an eight out of ten.