Consulting and Development at S-Markt und Mehrwert

TL;DR

  • Technologies: Ansible, Docker, Consul, Python, Kubernetes, NodeJS, Magnolia CMS, Prometheus, Grafana
  • Role: External developer, architect, and consultant driving DevOps transformation and implementing critical infrastructure
  • Key learning: Successful transformation requires not just technical solutions but tailored training and understanding of organizational readiness

Working as an external consultant for S-Markt und Mehrwert offered a unique opportunity to wear multiple hats - from process analyst to trainer, from infrastructure architect to backend developer. This diversity of responsibilities provided insights into how financial service providers modernize their technology stacks while managing regulatory constraints and organizational change.

Analyzing the development landscape

My engagement began with a comprehensive analysis of the existing development processes. Through structured developer interviews, I mapped the current state of their software development lifecycle, identifying pain points and bottlenecks that hindered efficiency. The resulting report didn't just highlight problems; it provided a roadmap for transformation, prioritizing improvements based on impact and feasibility.

This analytical phase taught me that technical debt isn't just about code - it's about processes, knowledge gaps, and organizational structures. Understanding the human elements behind technical challenges proved essential for proposing solutions that teams would actually adopt.

Building DevOps capability through education

Recognizing that sustainable change requires knowledge transfer, I designed and delivered multiple training programs tailored to the team's needs. The curriculum covered essential DevOps topics: CI/CD pipelines, test-driven development, Kubernetes fundamentals, Docker containerization, and Git workflows. Each training session was customized based on the participants' current knowledge level and the strategic plans for technology adoption.

The Kubernetes workshop deserves special mention. To make abstract concepts tangible, I built a custom Raspberry Pi cluster that served as our live Kubernetes environment. This hands-on approach transformed theoretical knowledge into visceral understanding. Participants could literally pull the power cord from a node and watch Kubernetes respond - seeing firsthand what happens when a master node fails versus an agent node. The physical connection between hardware and software sparked genuine excitement and "aha" moments among participants. Watching their faces light up as they grasped how Kubernetes orchestration actually works remains one of my favorite training memories.
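To give a feel for what the group was observing, here is a minimal sketch of the same experiment done programmatically with the official Kubernetes Python client instead of kubectl; the kubeconfig, timeouts, and output format are illustrative assumptions, not the workshop's actual setup.

    # Minimal sketch: watch node Ready conditions with the official Kubernetes
    # Python client (pip install kubernetes). Assumes a kubeconfig pointing at
    # the workshop cluster; everything here is illustrative.
    from kubernetes import client, config, watch

    def watch_node_health(timeout_seconds: int = 300) -> None:
        config.load_kube_config()  # or load_incluster_config() when running in-cluster
        v1 = client.CoreV1Api()
        for event in watch.Watch().stream(v1.list_node, timeout_seconds=timeout_seconds):
            node = event["object"]
            ready = next(
                (c.status for c in node.status.conditions if c.type == "Ready"),
                "Unknown",
            )
            # Pulling the power cord on a Pi flips its Ready status to "Unknown"
            # after the node-monitor grace period, and its pods get rescheduled.
            print(f"{event['type']:<10} {node.metadata.name:<20} Ready={ready}")

    if __name__ == "__main__":
        watch_node_health()

In the workshop itself the same behaviour was visible with a plain kubectl get nodes --watch, which kept the focus on the cluster rather than on tooling.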

Beyond the Pi cluster, the sessions drew on real examples from their codebase: we built the actual pipelines the teams would go on to use and containerized their own applications. The goal was to demystify DevOps practices and make them accessible to developers who had worked primarily in traditional enterprise environments.

Infrastructure as code for the Messenger Service

The Messenger Service project presented an interesting architectural challenge. Initially, I advocated for Kubernetes within the organization: dynamically scheduling and managing containers for individual phone numbers is exactly what Kubernetes was designed for - a textbook scenario.

However, Kubernetes wasn't available in their environment. This constraint led to a more pragmatic approach: building an MVP cluster management solution using Ansible scripts combined with Consul for service discovery. Python served as the glue layer, providing API wrappers and filling gaps where the standard tooling lacked required functionality. Prometheus and Grafana provided monitoring and alerting.
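The Python glue layer itself isn't shown in this post, but the following sketch gives a flavour of what such a wrapper around Consul's HTTP API can look like, using plain requests; the service name, ports, and health endpoint are illustrative assumptions rather than the project's real values.

    # Sketch of a small Consul wrapper of the kind described above, talking to
    # Consul's HTTP API via requests. Names, ports, and the /health endpoint
    # are assumptions for illustration only.
    import requests

    CONSUL = "http://127.0.0.1:8500"

    def register_service(name: str, service_id: str, port: int) -> None:
        """Register a per-number messenger container with a basic HTTP health check."""
        payload = {
            "Name": name,
            "ID": service_id,
            "Port": port,
            "Check": {
                "HTTP": f"http://127.0.0.1:{port}/health",
                "Interval": "10s",
                "DeregisterCriticalServiceAfter": "1m",
            },
        }
        requests.put(f"{CONSUL}/v1/agent/service/register", json=payload).raise_for_status()

    def healthy_instances(name: str) -> list[tuple[str, int]]:
        """Return (address, port) pairs for instances that pass their health checks."""
        resp = requests.get(f"{CONSUL}/v1/health/service/{name}", params={"passing": "true"})
        resp.raise_for_status()
        return [
            (entry["Service"]["Address"] or entry["Node"]["Address"], entry["Service"]["Port"])
            for entry in resp.json()
        ]

    if __name__ == "__main__":
        register_service("messenger", "messenger-demo-1", 8080)
        print(healthy_instances("messenger"))

The division of labour follows from the paragraph above: Ansible places and starts the containers, while Consul's catalog answers where the healthy instances are at any given moment - roughly the scheduling/discovery split the MVP relied on.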

The result was enlightening. We successfully achieved our goals without Kubernetes, and in doing so, avoided the complexity that comes with operating a full Kubernetes cluster. This experience reinforced an important lesson: sometimes the "perfect" solution isn't the practical one. Our custom-built orchestration with Ansible and Consul proved that understanding the actual requirements and working within constraints can lead to simpler, more maintainable solutions that still deliver the needed functionality.

Modernizing HaspaJoker's backend

Working on HaspaJoker involved both evolution and revolution. The backend for frontend (BFF) layer in NodeJS needed enhancement to better serve the mobile and web clients. Meanwhile, the headless Magnolia CMS required significant modernization to meet contemporary content management needs.

The CMS work was particularly interesting. Beyond standard upgrades and API optimizations, I implemented a sophisticated notification system with customizable logic and comprehensive monitoring. Users could receive personalized notifications based on their preferences and behaviors, while administrators had full visibility into delivery status and engagement metrics. The thumbnail service I developed dramatically improved page load times by automatically generating and caching optimized images for different device types.
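The thumbnail service itself is part of the HaspaJoker stack and its code isn't reproduced here; purely as an illustration of the generate-and-cache idea, the Python/Pillow sketch below resizes a source image per device class and reuses the result on subsequent requests. The breakpoints, cache directory, and function names are assumptions.

    # Illustrative sketch of a generate-and-cache thumbnail flow, written in
    # Python/Pillow for brevity; the real service lives in the Magnolia/NodeJS
    # stack, and all breakpoints and paths here are assumed values.
    import hashlib
    from pathlib import Path

    from PIL import Image  # pip install Pillow

    CACHE_DIR = Path("/var/cache/thumbnails")
    DEVICE_WIDTHS = {"phone": 480, "tablet": 1024, "desktop": 1920}  # assumed breakpoints

    def thumbnail_for(source: Path, device: str) -> Path:
        """Return a cached, device-sized JPEG for `source`, generating it on first use."""
        width = DEVICE_WIDTHS[device]
        key = hashlib.sha256(f"{source.resolve()}:{width}".encode()).hexdigest()
        target = CACHE_DIR / f"{key}.jpg"
        if target.exists():  # cache hit: skip the expensive resize entirely
            return target
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        with Image.open(source) as img:
            img.thumbnail((width, 10_000_000))  # cap the width, keep the aspect ratio
            img.convert("RGB").save(target, format="JPEG", quality=80, optimize=True)
        return target

Serving the cached files through a CDN or reverse-proxy cache then turns repeat requests into plain static hits, which is typically where most of the page-load improvement comes from.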