I build data systems
that think, scale,
and ship.

Data Platform Engineer · AWS · CrewAI · Strands SDK · Building in Public

Architecting cloud-native financial systems. Scaling AWS payment platforms processing 13M+ records daily. Building in public.

Download Resume

About

Who I am

Viswanath Nagarajan

Data Platform Engineer · San Antonio, TX

Data Platform Engineer specializing in cloud-native financial systems and AI-augmented engineering. Architected agentic frameworks using CrewAI/Strands SDK to reduce engineering cycles from weeks to hours. Scaling AWS payment platforms processing 13M+ records daily and building AgentFlow — an open-source agentic ETL framework.

Python · AWS · PySpark · Spark · Airflow · CrewAI · Strands SDK · Terraform · dbt
vishwa.n0998@gmail.com
San Antonio, TX

4+

Years Experience

Projects Delivered

13M+

Records Processed

Work

Featured projects

Click any card to expand the full case study.

AI Orchestration Framework

  • 95% reduction in onboarding time
  • Multi-agent orchestration
  • Autonomous code conversion

Architected agentic AI framework using CrewAI and Strands SDK with multi-agent orchestration to autonomously convert legacy code.

CrewAI · Strands SDK · Python · AWS

Unified Payment Platform

  • 13M+ records processed daily
  • 15-minute Lambda execution windows
  • Exactly-once guarantee across channels

Serverless platform unifying multi-channel payment processing into a single AWS architecture with stateful orchestration and exactly-once transaction guarantees.

AWS Lambda · Step Functions · DynamoDB · S3 · Python
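A common way to get an exactly-once guarantee across channels is an idempotency key checked with a conditional write (in DynamoDB, a PutItem with an attribute_not_exists condition). Here is a minimal sketch of that deduplication pattern, with an in-memory dict standing in for the table; the names and record shape are illustrative, not the platform's actual code:

```python
# Exactly-once processing via an idempotency key: every payment carries a
# transaction id, and a conditional "insert if absent" decides whether the
# record is new. An in-memory dict stands in for a DynamoDB table here.

class IdempotencyStore:
    def __init__(self):
        self._seen = {}

    def put_if_absent(self, txn_id, payload):
        # Mirrors a DynamoDB PutItem with ConditionExpression
        # "attribute_not_exists(txn_id)": returns False on duplicates.
        if txn_id in self._seen:
            return False
        self._seen[txn_id] = payload
        return True

def process(store, record, sink):
    """Apply a payment record exactly once, even if redelivered."""
    if store.put_if_absent(record["txn_id"], record):
        sink.append(record)  # side effect runs only on first delivery

store, sink = IdempotencyStore(), []
for rec in [{"txn_id": "t1", "amount": 10},
            {"txn_id": "t1", "amount": 10},   # duplicate delivery
            {"txn_id": "t2", "amount": 25}]:
    process(store, rec, sink)

assert [r["txn_id"] for r in sink] == ["t1", "t2"]
```

Because at-least-once delivery is the norm for Lambda event sources, the dedup check is what upgrades the pipeline from "processed at least once" to "applied exactly once".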

Financial Institution Migration Engine

  • Automated FI onboarding
  • Data ingestion pipeline
  • Migration engine

A migration engine that automates financial institution onboarding and the associated data ingestion.

AWS · Python · Airflow · Spark · Terraform

Building in public

Open source + side projects

Every tool I wish existed when I needed it. Built in the open, shipped continuously.

● Live

AgentFlow

Stop writing DAGs. Define goals.

AI-native ETL orchestration framework. Define your pipeline in plain English — agents handle orchestration, failure recovery, schema validation, and documentation. dbt meets CrewAI.

Python · CrewAI · Strands SDK · AWS · Airflow
View on GitHub →
◐ Building

Austin 311 Analytics

Real-time city intelligence, open data.

Live streaming analytics platform ingesting Austin Open Data 311 service requests. Kinesis → Lambda → Redshift pipeline with a React dashboard showing neighborhood trends, response-time SLAs, and anomaly detection.

Kinesis · Lambda · Redshift · React · Python
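In a Kinesis → Lambda pipeline, records arrive base64-encoded inside the event envelope, so the consumer's first job is decode-and-parse before staging rows for a Redshift COPY. A minimal handler sketch under that assumption (field names like sr_number are illustrative of 311 open data, not the project's actual schema):

```python
import base64
import json

def handler(event, context=None):
    """Decode a Kinesis event batch into rows ready for Redshift staging."""
    rows = []
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded in record["kinesis"]["data"].
        payload = base64.b64decode(record["kinesis"]["data"])
        req = json.loads(payload)
        rows.append({
            "service_request_id": req["sr_number"],
            "complaint_type": req["sr_type_desc"],
            "created_at": req["sr_created_date"],
        })
    # In the real pipeline these rows would land in S3 and be loaded
    # into Redshift via COPY; here we just return them.
    return rows

# Synthetic event mimicking the Kinesis -> Lambda record envelope
event = {"Records": [{"kinesis": {"data": base64.b64encode(json.dumps({
    "sr_number": "22-00012345",
    "sr_type_desc": "Pothole Repair",
    "sr_created_date": "2024-01-15T08:30:00",
}).encode()).decode()}}]}

print(handler(event))
```

Batching decoded rows through S3 and COPY, rather than row-by-row inserts, is what keeps Redshift load costs sane at streaming volumes.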
View on GitHub →
○ Coming soon

Next project

Something is brewing — stay tuned.

Building in public means shipping continuously. The next tool is in the ideation phase — follow on GitHub or LinkedIn to see it take shape.

Flagship project

AgentFlow

Stop writing DAGs. Define goals.

Why I built this

After 4 years of writing data pipelines, I kept running into the same problem: DAGs describe what you wrote, not what you want. The intent lives in the README. The code drifts. They never sync.

While experimenting with CrewAI and the Strands SDK for multi-agent orchestration, I realized the same pattern — describe a goal, agents handle the how — could completely replace the way we write ETL pipelines. So I built AgentFlow: plain English in, typed pipeline steps out. No eval. No codegen. Real output.

It's early. It's rough in places. And it already cuts pipeline onboarding from hours to minutes.

GitHub Stars · ⭐ Star us

License · MIT (Open Source)

Status · Active development

Roadmap

v0.1

Alpha — Core engine + Airflow backend

Shipped

v0.2

Self-Healing — Agent-driven failure recovery

In progress

v0.3

Cloud Native — AWS Step Functions backend

Planned

v1.0

GA — Prefect + Dagster backends, docs site

Planned

# AgentFlow
from agentflow import Pipeline

pipeline = Pipeline.from_goal(
    "Extract customers from Postgres daily, "
    "validate schema, load to Redshift, "
    "alert Slack if error rate > 2%."
)

Career

Experience

Southwest Business Corporation (SWBC)

Apr 2024 – Present
  • Scaling AWS payment platforms
  • Researching blockchain integration for immutable, decentralized data protocols
  • Building cloud-native financial systems

JerseySTEM

Jul 2023 – Mar 2024
  • 30% ETL performance improvement
  • Data pipeline optimization

Xforia Solutions

Jan 2023 – Jun 2023
  • 3TB+ database optimization
  • 25% query performance improvement

Wipro

Jul 2020 – Aug 2021

Built financial data pipelines with Airflow, Spark, and Hadoop.

Background

Education & Certifications

Degrees

M.S. in Computer Science

New York University (NYU)

Sept 2021 – May 2023

B.Tech in Computer Science & Engineering

SRM University

Aug 2016 – May 2020

Certifications

AWS Certified Cloud Practitioner (CCP)

Certified
Verify credential →

AWS Certified Data Engineer

In progress

Skills

Tech stack

Hover a badge for proficiency level and context.

Platforms

AWS Lambda
AWS Step Functions
AWS S3
AWS DynamoDB
AWS Glue
AWS Redshift
Snowflake
Spark
Airflow

Programming

Python
SQL
PySpark
Bash
Java

AI & Automation

CrewAI
Strands SDK
Amazon Q
GitHub Copilot

DevOps & Reliability

Terraform
CloudFormation
Docker
CI/CD
CloudWatch

Writing

Recent posts

All posts →

I post weekly on LinkedIn

Deep dives on AI-native data engineering, AgentFlow updates, and what I'm learning building in public. No fluff.

Follow on LinkedIn →