When we talk about employee co-determination in a company, the topic of transparent salaries inevitably comes up at some point: who gets to see the salaries, and who should decide on them?
There is a well-known rule of thumb among microservices advocates that you should never share a database between microservices. This rule is, in my humble opinion, categorically wrong and yet in most cases right. Software architecture is all about making trade-offs, and so anyone who considers themselves an architect should not take such rules at face value. This blog post is all about that trade-off.
This is part 2 in a series of blog posts on macro metaprogramming in Scala 3. (Click here for part 1.) In the previous part I introduced the two macro APIs as well as several related metaprogramming concepts involving type families and implicits. If you haven’t read it yet, you should do so now, as the rest of this article won’t make sense without it. In this second part, we will apply all of that knowledge to a practical example and learn how to generate new classes with macros. Quite a bit of arcane magic is needed to make this possible, and my goal for this blog series is to share with you all the tricks I have worked out to maneuver around the limitations of the compiler.
With the release of Scala 3, one of the biggest changes to the language revolves around metaprogramming: inline functions, match types, generic-programming tools like tuple types and Mirrors, as well as a new macro API have been added to make code generation a major concern of Scala. Naturally, one of the first things you may want to do is generate new classes, which is much harder than it sounds. My goal for this series of blog posts is to teach you all of the secret tricks to work around macro limitations and obtain the forbidden fruit (or something close to it). Part 2 can be found here.
In a past project, the customer used Prisma Cloud Compute to scan running containers for known vulnerabilities (this is not an endorsement of this particular product; it is simply the one the customer had decided to employ). In theory, it provided a detailed view of the container patch level within the organization. In practice, however, the end result was often one of two options:
Kafka offers two cleanup policies, which sounds simple enough: “delete”, where data is deleted after a certain amount of time, and “compact”, where only the most recent value is kept for each key. But what if data is not deleted or compacted as expected?
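To make the two policies concrete, here is a minimal sketch (not taken from the article) that creates one topic per policy with Kafka’s AdminClient; the topic names, broker address, retention value, and partition/replica counts are placeholder assumptions:

```kotlin
import org.apache.kafka.clients.admin.AdminClient
import org.apache.kafka.clients.admin.AdminClientConfig
import org.apache.kafka.clients.admin.NewTopic
import org.apache.kafka.common.config.TopicConfig

fun main() {
    val admin = AdminClient.create(
        mapOf(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG to "localhost:9092")
    )

    // "delete": records older than retention.ms become eligible for deletion
    val deleteTopic = NewTopic("thing-audit-log", 3, 1.toShort()).configs(
        mapOf(
            TopicConfig.CLEANUP_POLICY_CONFIG to TopicConfig.CLEANUP_POLICY_DELETE,
            TopicConfig.RETENTION_MS_CONFIG to (7L * 24 * 60 * 60 * 1000).toString() // 7 days
        )
    )

    // "compact": only the latest record per key is kept once the log cleaner has run
    val compactTopic = NewTopic("thing-state", 3, 1.toShort()).configs(
        mapOf(TopicConfig.CLEANUP_POLICY_CONFIG to TopicConfig.CLEANUP_POLICY_COMPACT)
    )

    admin.createTopics(listOf(deleteTopic, compactTopic)).all().get()
    admin.close()
}
```

Note that compaction only removes older values for a key when the log cleaner actually runs, so a consumer may still see several records per key in the meantime.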
Our systems today are typically distributed and sometimes integrated via an event bus such as Kafka. We store data in a database and publish events to inform other systems of changes. For example, the system that stores a Thing is eventually consistent with the other systems that consume the ThingCreated event. This means that at some point the other systems will be in the state they should reach once they find out about the new Thing. When systems fail to achieve this level of consistency, it often takes significant time for analysis, troubleshooting, and restoring consistency. We would rather save ourselves this time and instead develop correct systems.
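As a rough illustration of this setup (my own sketch, not code from the article), a service might persist a Thing and then publish a ThingCreated event; the ThingRepository interface, the topic name, and the JSON payload are hypothetical:

```kotlin
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerRecord

// Hypothetical persistence abstraction, for illustration only.
interface ThingRepository {
    fun save(id: String, name: String)
}

class ThingService(
    private val repository: ThingRepository,
    private val producer: KafkaProducer<String, String>
) {
    fun createThing(id: String, name: String) {
        // 1. Store the Thing in our own database.
        repository.save(id, name)
        // 2. Publish a ThingCreated event so downstream systems can eventually catch up.
        //    Doing these two steps independently can lose events if the process crashes
        //    in between -- making this reliable is exactly where the hard analysis starts.
        producer.send(
            ProducerRecord("things", id, """{"type":"ThingCreated","id":"$id","name":"$name"}""")
        )
    }
}
```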
We are the topic of the current episode of the podcast Expedition Arbeit. In the format O-Ton Arbeit, Jürgen Alef interviewed us and describes his impressions of our working world.
Murphy’s Law says: “Anything that can go wrong will go wrong” - if there is one thing you can count on, it is that things will fail. So let’s look at how we can make event processing in our Kafka consumers more robust with retries. In our project we use Kafka with Kotlin and spring-kafka, but the basic concepts also translate to other systems.
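As a minimal sketch of what such a retry setup can look like with spring-kafka (illustrative only; the concrete values and the dead-letter handling are assumptions, not necessarily what we do in the project):

```kotlin
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.kafka.core.KafkaTemplate
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer
import org.springframework.kafka.listener.DefaultErrorHandler
import org.springframework.util.backoff.FixedBackOff

@Configuration
class KafkaRetryConfig {

    // Retry a failed record three times, pausing one second between attempts.
    // Once the retries are exhausted, the recoverer publishes the record to
    // the corresponding "<topic>.DLT" dead-letter topic.
    @Bean
    fun errorHandler(template: KafkaTemplate<Any, Any>): DefaultErrorHandler =
        DefaultErrorHandler(DeadLetterPublishingRecoverer(template), FixedBackOff(1_000L, 3))
}
```

With a recent Spring Boot version, a CommonErrorHandler bean like this is picked up by the auto-configured listener container factory, so @KafkaListener methods get the retry behavior without further wiring.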
We have news, dear readers! About eleven years ago we founded inoio to develop innovative software solutions for companies in a healthy working environment. Since then we have grown into a small but fine company that stands out through its creativity, drive, and authenticity. And all of that thanks to our culture of self-organization and co-determination.
Because we use Apache Kafka again and again in our projects, and so far I haven’t found the “most important things” in a sufficiently compact form in one place, I have taken the time to prepare this for myself/us/you. For me, the “most important things” are the basic concepts and some configuration properties for brokers/producers/consumers that you should know when choosing trade-offs between e.g. consistency/durability and availability.
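As one example of such a trade-off, here is a small sketch (my own, not from the article) of a producer configured for durability rather than lowest latency; the broker address and the exact values are illustrative:

```kotlin
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerConfig
import org.apache.kafka.common.serialization.StringSerializer

// A producer leaning towards consistency/durability rather than availability/latency.
fun durableProducer(): KafkaProducer<String, String> {
    val props = mapOf(
        ProducerConfig.BOOTSTRAP_SERVERS_CONFIG to "localhost:9092",
        ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG to StringSerializer::class.java,
        ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG to StringSerializer::class.java,
        ProducerConfig.ACKS_CONFIG to "all",                // wait for all in-sync replicas
        ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG to "true", // no duplicates on internal retries
        ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG to "120000"
    )
    // On the broker/topic side this is typically paired with settings such as
    // replication.factor=3 and min.insync.replicas=2.
    return KafkaProducer<String, String>(props)
}
```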
Questions from day-to-day work with Git that are confusing at first, but actually fairly easy to solve… once you know how.