Blog posts by Martin Grotzke

Our systems today are typically distributed, and sometimes integrated via an event bus such as Kafka. We store data in a database and publish events to inform other systems of changes. For example, the system that stores a Thing is eventually consistent with the other systems that consume the ThingCreated event. This means that, at some point, the other systems will reach the state they are supposed to be in once they learn about the new Thing. When systems fail to achieve this level of consistency, it often takes significant time for analysis, troubleshooting and restoring consistency. We would like to save ourselves this time and instead develop correct systems.
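
As a minimal sketch of this pattern (the repository trait, the service class, the "things" topic and the JSON string are all made up for illustration, and serialization is simplified), storing a Thing and publishing a ThingCreated event might look roughly like this:

```scala
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

case class Thing(id: String, name: String)

// Hypothetical persistence interface, standing in for whatever database access the system uses.
trait ThingRepository {
  def save(thing: Thing): Unit
}

class ThingService(repository: ThingRepository, producer: KafkaProducer[String, String]) {

  def createThing(thing: Thing): Unit = {
    // 1. store the new Thing in our own database
    repository.save(thing)
    // 2. publish a ThingCreated event so that other systems can eventually catch up
    //    (a handwritten JSON string keeps the sketch short; real code would use a proper serializer)
    val event = s"""{"type":"ThingCreated","id":"${thing.id}","name":"${thing.name}"}"""
    producer.send(new ProducerRecord("things", thing.id, event))
  }
}
```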

We have news, dear friends! About eleven years ago we founded inoio to develop innovative software solutions for companies in a healthy working environment. Since then we have grown into a small but fine company that delights with its creativity, drive and authenticity. And all of that thanks to our culture of self-organization and co-determination.

Kafka Fundamentals

Because we use Apache Kafka again and again in our projects, and I haven’t yet found the “most important things” sufficiently compact in one place, I have taken the time to write them up for myself/us/you. For me the “most important things” are the basic concepts and some configuration properties for brokers/producers/consumers that one should know when choosing trade-offs between e.g. consistency/durability and availability.
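
To make the trade-off concrete, here is a small sketch (the broker address and the topic-level settings are placeholders, not recommendations from this post) of a producer configured to favour consistency/durability, using standard Kafka client properties:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig}
import org.apache.kafka.common.serialization.StringSerializer

val props = new Properties()
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder broker address
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
// Favour consistency/durability over availability and latency:
props.put(ProducerConfig.ACKS_CONFIG, "all")                 // wait for all in-sync replicas
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")  // avoid duplicates caused by retries
// On the broker/topic side this is typically paired with e.g. replication.factor=3 and
// min.insync.replicas=2, so that an acknowledged write survives the loss of a single broker.

val producer = new KafkaProducer[String, String](props)
```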

In our last weekly knowledge-sharing session at inoio, we discussed our experiences and thoughts on how to design Kafka topics, i.e. how to decide to which topic a (new) event type should be published. In customer projects we have sometimes seen that Kafka topics were not chosen properly and had to be changed later. Since such a change usually affects several systems/teams, it causes quite some effort, so it is better to invest some time in this decision up front. How to choose the Kafka topic for an event doesn’t seem to be discussed much publicly, so we want to share our thoughts on that here.
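
One consideration that often drives this decision (only one aspect, not necessarily the conclusion of that session) is ordering: Kafka guarantees order only within a partition, so event types that must be consumed in order for the same entity can go to one topic, keyed by the entity id. A hypothetical sketch with a made-up "thing-events" topic:

```scala
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Different event types for the same entity are published to one topic, keyed by the entity id,
// so a consumer sees ThingCreated before ThingUpdated for any given id.
def publishThingEvents(producer: KafkaProducer[String, String], thingId: String): Unit = {
  producer.send(new ProducerRecord("thing-events", thingId, s"""{"type":"ThingCreated","id":"$thingId"}"""))
  producer.send(new ProducerRecord("thing-events", thingId, s"""{"type":"ThingUpdated","id":"$thingId"}"""))
}
```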

Thank you, 2020!

Now it has got hold of me after all, and I would like to capture a look at inoio in this year 2020 - for ourselves, but also for others who are interested.

This post is about Cassandra’s batch statements and which kinds of batch statements are OK and which are not. Often, when batch statements are discussed, it’s not clear whether a particular statement refers to single-partition batches, multi-partition batches, or both - which is the most important question IMO (you should know why after you’ve read this post).
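
As a rough illustration of that distinction (assuming the DataStax Java driver 4.x API and a made-up user_emails table partitioned by user_id): in a single-partition batch all statements share one partition key, while a multi-partition batch spans several partitions and, as a logged batch, needs batchlog coordination across nodes:

```scala
import com.datastax.oss.driver.api.core.CqlSession
import com.datastax.oss.driver.api.core.cql.{BatchStatement, DefaultBatchType, SimpleStatement}

val session = CqlSession.builder().build()

// Single-partition batch: both statements target partition key user_id = 42,
// so they are applied atomically within that single partition.
val singlePartition = BatchStatement.builder(DefaultBatchType.LOGGED)
  .addStatement(SimpleStatement.newInstance(
    "INSERT INTO user_emails (user_id, email) VALUES (42, 'a@example.com')"))
  .addStatement(SimpleStatement.newInstance(
    "UPDATE user_emails SET verified = true WHERE user_id = 42 AND email = 'a@example.com'"))
  .build()
session.execute(singlePartition)

// Multi-partition batch: the two inserts target different partition keys (42 and 43),
// which forces coordination via the batchlog and puts noticeably more load on the cluster.
val multiPartition = BatchStatement.builder(DefaultBatchType.LOGGED)
  .addStatement(SimpleStatement.newInstance(
    "INSERT INTO user_emails (user_id, email) VALUES (42, 'a@example.com')"))
  .addStatement(SimpleStatement.newInstance(
    "INSERT INTO user_emails (user_id, email) VALUES (43, 'b@example.com')"))
  .build()
session.execute(multiPartition)
```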

In this post I’m going to describe an issue we experienced with nginx and its handling of Server Side Includes (SSIs). We saw that nginx first decodes the SSI URI path and then re-encodes it when loading the resource, and in some cases the URI path encoded by nginx differed from the original one. The solution is easy (use query parameters if in doubt), but I thought I’d share this so that others maybe don’t run into this issue and/or see how to debug such things.

Here’s a short post with linked slides and the recording of our first Reactive Systems Hamburg Meetup, where Martin Krasser compared the Event-Sourcing/CQRS tools Akka Persistence (which he also authored, as the successor of his Eventsourced library) and Eventuate (which he’s now building for Red Bull Media House to support a globally distributed system).

Now it’s official: for six months we at inoio, together with other service providers, have been working with Galeria Kaufhof on their new multi-channel online platform - project name “Jump”. The new system is intended to significantly reduce time-to-market when it comes to integrating and developing new features.

In our current project we have to consume a REST web service that provides data as a multipart document, e.g. a list of videos (or video metadata) where each video is a single part. While it’s common to submit or handle multipart requests (e.g. multipart/form-data), the multipart content type is not widely used for HTTP responses. As a consequence, HTTP clients’ support for multipart responses is not as good as for requests. The Play Framework’s WS client, for example, does not directly support responses of type multipart/*.
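
One possible workaround (just an illustrative sketch, not necessarily the approach from the project; it assumes the JavaMail API is on the classpath) is to fetch the response as raw bytes and let JavaMail’s MimeMultipart do the multipart parsing:

```scala
import javax.mail.internet.MimeMultipart
import javax.mail.util.ByteArrayDataSource

// bodyBytes and contentType would come from the HTTP response, e.g. fetched via Play's WS client.
def parseParts(bodyBytes: Array[Byte], contentType: String): Seq[Array[Byte]] = {
  val multipart = new MimeMultipart(new ByteArrayDataSource(bodyBytes, contentType))
  (0 until multipart.getCount).map { i =>
    multipart.getBodyPart(i).getInputStream.readAllBytes() // each part, e.g. one video's metadata
  }
}
```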

We develop custom software, sometimes from scratch, sometimes to be integrated into existing ecosystems. In both cases, the customer often wants to fix which technology we should use. We take the customer’s requests into consideration a lot, but we try to deliver not simply what they ask for but the best overall solution given the parameters of the project. Often we recommend Scala as the programming language, despite the fact that customers want Plain Old Java.

As this has been asked on the Play mailing list, this post guides you through the setup of MyBatis with Play 2. For the integration of MyBatis and Play we’re using the MyBatis-Guice subproject, so that we can inject MyBatis mappers into managed Play controllers (currently only documented in the Play 2.1 Highlights) - or mappers into repositories into services into controllers, if you like ;-)
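
To give an idea of what such a mapper can look like (the users table, the UserMapper trait and the query are made up for illustration), a MyBatis mapper is declared as an interface/trait with annotations, and MyBatis-Guice provides the implementation that can then be injected:

```scala
import org.apache.ibatis.annotations.{Param, Select}

// MyBatis-Guice creates the implementation of this mapper at runtime and makes it
// injectable, e.g. into a repository or a managed Play controller.
trait UserMapper {
  @Select(Array("SELECT name FROM users WHERE id = #{id}"))
  def findNameById(@Param("id") id: Long): String
}
```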