- 1 Introduction
- 2 What is Argo Workflows?
- 3 Why Do I Need Argo Workflows?
- 4 The Case Against Argo Workflows
- 5 Mitigating Factors
- 6 Conclusion
Introduction

I’ve been getting down and dirty with Argo Workflows over the past few months as part of my day job. I’ve been evaluating it as a workflow automation tool for some risk analytics, and I thought I’d share some of my experiences. This is the first part in a series explaining what Argo Workflows is and what it can bring to you and your company.
What is Argo Workflows?
Argo Workflows describes itself as “an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes”. In plain English, it’s a tool for chaining simple Kubernetes jobs/pods together into useful workflows. When I say chains, I really mean DAGs (directed acyclic graphs), which means you can build up very complicated workflows indeed!
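To make that concrete, here’s a minimal sketch of a two-step DAG workflow. The task names and image are purely illustrative, but the overall shape follows the Argo Workflow resource spec: the workflow logic lives in the `dag` template, and the actual work is just a container.

```yaml
# A minimal two-step DAG: "extract" runs first, "process" runs once it finishes.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-dag-      # illustrative name
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        tasks:
          - name: extract
            template: run-step
          - name: process
            template: run-step
            dependencies: [extract]   # process waits for extract
    - name: run-step
      container:
        image: alpine:3             # any container image will do
        command: [echo, "doing some work"]
```

Because `dependencies` is a list, a task can wait on several upstream tasks at once, which is how you grow a simple chain into a full DAG.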
Why Do I Need Argo Workflows?
Every company I’ve worked in has, over time, accumulated a lot of workflows. Chances are that over time, you’ve accumulated workflows too. Don’t believe me? Take a look at your cron jobs, Windows Task Scheduler, or wherever you initiate your batch processing. Those jobs probably involve some form of moving data around and/or processing data, i.e. they’re workflows!
If those jobs have been built without the aid of a workflow tool, then they’re hidden workflows. A hidden workflow (a term I just invented by the way) is a workflow that is not easily visualised: you can’t see what’s going on inside it.
By porting these workflows into Argo, you gain visibility through Argo’s graph visualiser. This lets you quickly get a feeling for what your workflows look like, and hence a better understanding of what they’re doing.
The complexity comes in when there isn’t a clear line between the workflow code and the job code in your existing batch jobs. If you have beautifully architected batch jobs with amazing separation, this doesn’t apply to you, but who spends time architecting their batch jobs?!
In my experience, batch jobs are hacked together in a low-level scripting language (bash, cmd), with a smattering of high-level languages thrown in only when the scripting language really couldn’t get the job done. The work they’re doing is boring and scripting languages aren’t fun to write in, so they’re unloved and typically unarchitected.
Argo incentivises you to separate the workflow code (workflows are built up of Argo Kubernetes resources using YAML) from the job code (written in any language, packaged as a container to run in Kubernetes). In this way you can take a mess of spaghetti batch code and turn it into simple (dare I say reusable) components, orchestrated by Argo.
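As a sketch of that separation: the template below is a reusable component whose batch logic lives entirely inside a container image, while the orchestration concern (what to run, with which inputs) stays in YAML. The component name, image, and script path are hypothetical — the point is that the script itself knows nothing about Argo.

```yaml
# A reusable component: job code in the image, wiring in the YAML.
- name: load-prices                          # hypothetical component name
  inputs:
    parameters:
      - name: trade-date
  container:
    image: my-registry/price-loader:1.0      # hypothetical image
    command: [python, /app/load_prices.py]   # hypothetical script
    args: ["--date", "{{inputs.parameters.trade-date}}"]
```

Any workflow can then invoke `load-prices` with a different `trade-date`, which is what turns spaghetti batch code into components.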
This is a hard one to explain concisely as it encompasses a whole host of smaller benefits, such as:
Language Agnostic

Because the components are packaged as containers and run on Kubernetes, it doesn’t matter what language they’re written in. The aim here is to have each component be responsible for a single task and to have a straightforward interface (often JSON).
Easy To Test
I haven’t done any automated testing of containers yet, but the same idea of simple, single-purpose components would lend itself very well to it.
Scalable

Your jobs only use resources while they’re running, and it’s easy to spin up multiple copies of a job if you want things to run in parallel. Argo describes this as putting “a cloud-scale supercomputer at your fingertips”.
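For example, fanning a step out across a set of inputs is one field in the workflow spec. Assuming a template called `run-book` (hypothetical) that does the per-item work, a `withItems` loop like this launches one pod per item, all running concurrently:

```yaml
# Fan out: Argo launches one pod per item in the list, in parallel.
- name: analyse-all
  dag:
    tasks:
      - name: analyse
        template: run-book           # hypothetical per-item template
        arguments:
          parameters:
            - name: book
              value: "{{item}}"      # each pod gets one item
        withItems: ["book-a", "book-b", "book-c"]   # illustrative inputs
```

Kubernetes schedules the resulting pods across the cluster, so scaling out is a matter of adding items (and nodes), not rewriting the job.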
The Case Against Argo Workflows
It’s Not Yet Mature
It’s a new project and it’s currently being worked on heavily, which means things are changing and some features haven’t been built yet. You’re likely to encounter some bugs in the newest features, but you can expect them to be fixed fairly quickly.
The Community Isn’t Huge
If you’re anything like me, you’ll google for answers as soon as you have a question. For Argo, you can’t (yet) expect Google to have all the answers neatly packaged up for you in a Stack Overflow Q&A. Instead, you should expect to have to read the docs and the GitHub issues.
It’s Another Tool To Learn
Whenever you bring another tool into an enterprise, you need to consider the cost of supporting it and training other developers on it. This isn’t a drawback unique to Argo, but you do need to ensure the advantages given above are significant enough to justify the cost.
Mitigating Factors

A responsive team: The project is being worked on heavily and the maintainers are responsive. I’ve had same-day responses to issues I’ve raised on GitHub, and usually when I’ve spotted something I’m missing, that feature is already being worked on. There’s also an active Slack channel.
It’s fairly simple: The scope of the project is quite narrow, and it builds on the amazing piece of work that is Kubernetes for most of the heavy lifting. In essence this means you’re unlikely to hit a roadblock, as you can usually get what you want done using Kubernetes features even without Argo (Argo helps though!).
Conclusion

I’m a fan of Argo. I think the concept is a great one, and I can see the benefits it can bring to any organisation with a complicated back end. The idea of bringing good coding habits to unloved batch code, while giving me the freedom to write components in my language of choice, fills me with joy.
I hope this article has provided a good enough introduction to Argo to whet your appetite. If you’re looking for more info, stick around for part 2, where I’ll go a bit more in depth into how to configure Argo to get the best out of it.