A low-code, API-first practical approach to building an insurance claim handling process

Low-code and API-first are very hot topics at the moment. In this article I’d like to show you a practical example of how to design APIs for a basic insurance claim handling process, how to mock these APIs, and finally how to start experimenting with your business process to identify bottlenecks and possible pitfalls. Remember, fail-fast is king before you start any initiative, be it a small refactoring or a global enterprise transformation.

We will need zero Java / Scala / Kotlin coding skills, though we will learn the modern technology stack which enables all this low-code beauty: Apicurio Studio, Apicurio Schema Registry, Microcks, Camunda, Kubernetes, Docker, Java, Spring Boot and some other technologies. We will be using both the OpenAPI Specification and the AsyncAPI Specification to define our APIs.

Intro

Let’s define our starting point:

An insurance claim is a formal request by a policyholder to an insurance company for coverage or compensation for a covered loss or policy event. The insurance company validates the claim (or denies the claim). If it is approved, the insurance company will issue payment to the insured or an approved interested party on behalf of the insured

https://www.investopedia.com/terms/i/insurance_claim.asp

In a very simplified form it may look like this:

Please refer to Business Process Model and Notation (BPMN for short) if you need more information about the notation used to create this diagram.

If you prefer a practical approach, please check these awesome videos and give it a try yourself using Camunda Modeler or Lucidchart.

Each task and decision gateway typically represents a call to a particular business component. It can be anything from a single microservice to a huge system itself containing hundreds of microservices. Or it can be a 30-year-old mainframe monolith written in COBOL. The chances of that are actually quite high.

This is the very first point where the API-first approach helps a lot. We can hide all the complexity of the underlying business systems and let domain experts help us with the actual implementation.

Let’s define APIs for our components (or Service Tasks, if we’re using BPMN). Not all domain experts know the OpenAPI Specification or the AsyncAPI Specification, so why not get some help from a visual modelling tool like Apicurio Studio? The easiest way to try it is the demo version: https://studio.apicur.io

APIs overview

In this article we will be modelling the following APIs:

  1. Claim Registration API
  2. Assessment API
  3. Payment API
  4. Premiums API
  5. Event polling API

Claim Registration API

This API provides claim registration capability. Here is how I use https://studio.apicur.io to model claim registration:

I like this tool a lot, because you can collaborate in real time with other stakeholders: API designers, architects, or developers.

After that I extend the Claim data type in order to store several business attributes:

required:
    - insuranceId
    - timestamp
    - description
type: object
properties:
    insuranceId:
        type: string
    timestamp:
        format: date-time
        type: string
    description:
        type: string
example:
    insuranceId: ABC123
    timestamp: '2018-02-10T09:30Z'
    description: My car was damaged because of flooding

and the Registration data type:

description: ''
required:
    - claimId
type: object
properties:
    claimId:
        description: ''
        type: string
example:
    claimId: CLAIM-123

The business operation now looks like this:

post:
    requestBody:
        content:
            application/json:
                schema:
                    $ref: '#/components/schemas/Claim'
        required: true
    responses:
        '200':
            content:
                application/json:
                    schema:
                        $ref: '#/components/schemas/Registration'
            description: Returned if claim has been successfully registered
    operationId: register
    summary: Register insurance claim

As you can see, the policyholder files an insurance Claim and provides an insuranceId, a timestamp, and a description. The Claim Registration API returns a claimId, so the policyholder can later check the claim status.
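
As a quick sanity check of this contract, here is a minimal Java sketch of a client call. The base URL and the /claims path are placeholders of mine, not taken from the API document; point them at your Microcks mock or a real deployment:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ClaimRegistrationClient {

    public static void main(String[] args) throws Exception {
        // Placeholder URL: substitute your Microcks mock or real endpoint here
        String url = "http://localhost:8080/claims";

        String claim = "{\"insuranceId\": \"ABC123\", "
                + "\"timestamp\": \"2018-02-10T09:30Z\", "
                + "\"description\": \"My car was damaged because of flooding\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(claim))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Expecting the Registration payload, e.g. {"claimId": "CLAIM-123"}
        System.out.println(response.statusCode() + " " + response.body());
    }
}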

You can find full API Document here: https://github.com/tillias/api-first-insurance/blob/main/claim-registration-openapi.yml

Assessment API

The Assessment business component will have two APIs. One is synchronous and responsible for triggering the assessment:

openapi: 3.0.2
info:
  title: Assessment API
  version: 1.0.0
  description: Used to assess (validate) insurance claim(s)
paths:
  /assess:
    post:
      responses:
        '200':
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Assessment'
              examples:
                Sample assessment:
                  value:
                    assessmentId: ASSESS-123
          description: Returned if assessment was successfully started
      operationId: assess
      summary: Start assessment (validation) of the claim
    parameters:
      -
        examples:
          Example Claim:
            value: '"ABC-123"'
        name: claimId
        description: Claim Id to be validated
        schema:
          type: string
        in: query
        required: true
  /status:
    summary: Checks status of assessment
    get:
      parameters:
        -
          examples:
            Example assessment id:
              value: '"ASSESS-123"'
          name: assessmentId
          description: Id of assessment
          schema:
            type: string
          in: query
          required: true
      responses:
        '200':
          content:
            application/json:
              schema:
                enum:
                  - Pending
                  - Running
                  - Finished
                  - Error
                type: string
              examples:
                Pending status:
                  value: '"Pending"'
                Running status:
                  value: '"Running"'
          description: If assessment was found
      operationId: status
      summary: Check status of assessment
    parameters:
      -
        examples:
          Example assessment id:
            value: '"ASS-123"'
        name: assessmentId
        description: Id of assessment
        schema:
          type: string
        in: query
        required: true
components:
  schemas:
    Assessment:
      description: ''
      required:
        - assessmentId
      type: object
      properties:
        assessmentId:
          description: >-
            Represents assessment number for particular insurance claim. User can use this to
            check status of assessment
          type: string
      example:
        assessmentId: ASSESS-123

You can find full API Document here: https://github.com/tillias/api-first-insurance/blob/main/assessment-openapi.yml

If you look into the document, you will notice the second very important point about the OpenAPI Specification: even non-technical people (after some practice, of course) find these documents self-explanatory.

Let’s describe it one more time, though:

The API client triggers the process by passing a claimId to the API. Since assessment (validation) may take a long time, it makes absolute sense to accept the request and send an assessmentId back. It works like a ticket number: the client can check the assessment status at any time.
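
To make the ticket pattern concrete, here is a minimal Java polling-client sketch. The base URL is a placeholder and the fixed ids are taken from the examples above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AssessmentClient {

    private static final HttpClient HTTP = HttpClient.newHttpClient();
    // Placeholder base URL of the Assessment API (e.g. a Microcks mock)
    private static final String BASE = "http://localhost:8080";

    public static void main(String[] args) throws Exception {
        // Step 1: trigger the assessment and receive the "ticket" (assessmentId)
        String ticket = send(HttpRequest.newBuilder()
                .uri(URI.create(BASE + "/assess?claimId=ABC-123"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build());
        System.out.println("Ticket: " + ticket); // e.g. {"assessmentId": "ASSESS-123"}

        // Step 2: poll /status until the assessment leaves Pending/Running
        String status;
        do {
            Thread.sleep(2_000); // naive fixed delay; a real client would back off
            status = send(HttpRequest.newBuilder()
                    .uri(URI.create(BASE + "/status?assessmentId=ASSESS-123"))
                    .GET()
                    .build());
            System.out.println("Status: " + status);
        } while (status.contains("Pending") || status.contains("Running"));
    }

    private static String send(HttpRequest request) throws Exception {
        HttpResponse<String> response = HTTP.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}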

The more you use the OpenAPI Specification, the more natural it feels. You can additionally preview the documentation using either Apicurio Studio or https://editor.swagger.io

Assessment asyncAPI

The assessment routine may take a long time. For the sake of simplicity, let’s assume that after the routine is finished our assessment module may send one of two possible events:

  1. Claim accepted event
  2. Claim rejected event

Let’s describe this API using the AsyncAPI Specification. I will be using http://editor.asyncapi.org

asyncapi: '2.1.0'
info:
  title: Assessment Service
  version: 1.0.0
  description: This service is in charge of assessment process
channels:
  claim/accepted:
    subscribe:
      message:
        $ref: '#/components/messages/ClaimAccepted'
  claim/rejected:
    subscribe:
      message:
        $ref: '#/components/messages/ClaimRejected'
components:
  messages:
    ClaimAccepted:
      payload:
        type: object
        properties:
          claimId:
            type: string
            description: Id of a claim
          email:
            type: string
            format: email
            description: Official in charge
        example: {
          claimId: ABC-123,
          email: john-doe@foo.com
        }
    ClaimRejected:
      payload:
        type: object
        properties:
          claimId:
            type: string
            description: Id of a claim
          email:
            type: string
            format: email
            description: Official in charge
          reason:
            type: string
            description: Reason of rejection
        example: {
          claimId: ABC-123,
          email: john-doe@foo.com,
          reason: Client provided invalid insurance policy number
        }

The documentation is self-explanatory: the Assessment Service will send a ClaimAccepted message into the claim/accepted channel or a ClaimRejected message into the claim/rejected channel. So, if some other client wants to receive such messages, it should subscribe.

ClaimAccepted contains the claimId and the email of the responsible employee. ClaimRejected additionally contains the reason for rejection.
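
The AsyncAPI document deliberately says nothing about the broker. Assuming the channels are backed by Kafka topics (an assumption, as are the topic name and the DTO below), a subscriber could look like this minimal Spring Kafka sketch; it expects a JSON value deserializer to be configured:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class ClaimEventsListener {

    // Simple DTO mirroring the ClaimRejected payload from the AsyncAPI document
    public record ClaimRejected(String claimId, String email, String reason) {}

    // The claim/rejected channel mapped to an assumed Kafka topic name
    @KafkaListener(topics = "claim.rejected", groupId = "claim-consumers")
    public void onClaimRejected(ClaimRejected event) {
        System.out.printf("Claim %s rejected: %s (official in charge: %s)%n",
                event.claimId(), event.reason(), event.email());
    }
}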

Both API documents are available on GitHub.

Payment API

Let’s use an OpenAPI document directly to describe our next API, the Payment API. It enables the following capabilities:

  1. Calculates the payment amount
  2. Performs the actual payment (possibly by invoking other APIs: PayPal, Klarna, etc.)

Note how multiline descriptions are organized:

openapi: 3.0.2
info:
  title: Payment API
  version: 1.0.0
  description: |-
    This API provides methods for:
    - calculating the payment amount
    - performing the payment
paths:
  /amount:
    get:
      responses:
        '200':
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Amount'
              examples:
                Sample amount:
                  value:
                    currency: Euro
                    amount: '225.5'
          description: 'If claim is available, valid and amount is calculated'
      operationId: calculateAmount
      summary: Calculates payment amount for a given claim
    parameters:
      -
        examples:
          Sample claimId:
            value: '"ABC-123"'
        name: claimId
        schema:
          type: string
        in: query
        required: true
  /payment:
    post:
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Payment'
            examples:
              Example payment:
                value:
                  insuranceId: INS-123
                  claimId: ABC-123
                  amount:
                    currency: Euro
                    amount: '255.5'
        required: true
      responses:
        '200':
          content:
            application/json:
              schema:
                type: string
              examples:
                Transaction id:
                  value: TRANSACTION-7f681d66-1767-4fa6-b6bc-208cd74f59a6
          description: Returned if payment was successful
      operationId: executePayment
      summary: Performs payment
components:
  schemas:
    Amount:
      required:
        - currency
        - amount
      type: object
      properties:
        currency:
          type: string
        amount:
          type: string
      example:
        currency: Euro
        amount: '225.5'
    Payment:
      required:
        - insuranceId
        - claimId
        - amount
      type: object
      properties:
        insuranceId:
          description: Used to identify the payment recipient
          type: string
        claimId:
          type: string
        amount:
          $ref: '#/components/schemas/Amount'
          description: ''
      example:
        insuranceId: INS-123
        claimId: ABC-123
        amount:
          currency: Euro
          amount: '255.5'

If you still struggle to understand it, no worries! Copy the OpenAPI document and use https://editor.swagger.io or https://studio.apicur.io to visualize it. Give it a try yourself and check it out:

https://editor.swagger.io/?url=https://raw.githubusercontent.com/tillias/api-first-insurance/main/payment-openapi.yml

Notification asyncAPI

Let’s introduce a pure asyncAPI for the notification service. The main goal of this service is to send notifications when an insurance claim is rejected by the insurer. This can be an e-mail, physical mail, a phone call, or any combination of those.

asyncapi: '2.1.0'
info:
  title: Notification Service
  version: 1.0.0
  description: This service sends notifications when an insurance claim was rejected
channels:
  claim/rejected:
    publish:
      message:
        $ref: '#/components/messages/ClaimRejected'
components:
  messages:
    ClaimRejected:
      payload:
        type: object
        properties:
          claimId:
            type: string
            description: Id of a claim
          email:
            type: string
            format: email
            description: Official in charge
          reason:
            type: string
            description: Reason of rejection
        example: {
          claimId: ABC-123,
          email: john-doe@foo.com,
          reason: Client provided invalid insurance policy number
        }

It is very similar to the Assessment asyncAPI, except that publish is used instead of subscribe. This means that a client of this API can publish messages into the claim/rejected channel and trigger the Notification API with an event. This is called async invocation.

This is a very powerful pattern, because the Notification Service is completely decoupled from all other components. Whoever sends a ClaimRejected message can trigger those notifications.
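
A minimal sketch of the other side, again assuming Kafka and the same assumed topic name: any component that injects a KafkaTemplate can publish the event and thereby trigger the notifications (the ClaimRejected record is the hypothetical DTO from the listener sketch above).

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class ClaimRejectedPublisher {

    private final KafkaTemplate<String, ClaimEventsListener.ClaimRejected> kafka;

    public ClaimRejectedPublisher(KafkaTemplate<String, ClaimEventsListener.ClaimRejected> kafka) {
        this.kafka = kafka;
    }

    public void claimRejected(String claimId, String email, String reason) {
        // Fire-and-forget: the Notification Service reacts on its own schedule
        kafka.send("claim.rejected", new ClaimEventsListener.ClaimRejected(claimId, email, reason));
    }
}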

Careful readers may also notice that both the Assessment asyncAPI and the Notification asyncAPI use the same ClaimRejected schema. We will return to this topic in the API Refactoring chapter.

Premiums API

Let’s define our last API, the Premiums API. It enables the following capabilities:

  • checking whether future rates for a particular policyholder should be changed
  • changing future insurance premiums

openapi: 3.0.2
info:
  title: Premiums API
  version: 1.0.0
  description: |-
    Enables following capabilities:
    - Checking if future rates for particular insurance holder should be changed
    - Change future insurance premiums
paths:
  /premiums/validate:
    post:
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/ValidationRequest'
            examples:
              Sample request:
                value:
                  claimId: ABC-123
                  email: john.doe@foo.com
        required: true
      responses:
        '200':
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/RateChange'
              examples:
                Sample rate change:
                  value:
                    increase: true
                    currency: Euro
                    amount: '15.5'
                    claimId: ABC-123
          description: If validation was successful
      operationId: validatePremiums
      summary: Checks if future rates for a particular insurance holder should be changed
      description: |-
        If the insurance claim is valid and accepted by the insurer, it may affect future premiums.
        In most cases this is some increase in the monthly payments.
  /premiums:
    put:
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/RateChange'
            examples:
              Sample rate change:
                value:
                  increase: true
                  currency: Euro
                  amount: '15.5'
                  claimId: ABC-123
        required: true
      responses:
        '200':
          content:
            application/json:
              schema:
                type: string
              examples:
                New rate:
                  value: '"100.3"'
          description: if premiums are changed
      operationId: changePremiums
      summary: Adjust future premiums
components:
  schemas:
    ValidationRequest:
      required:
        - claimId
        - email
      type: object
      properties:
        claimId:
          type: string
        email:
          description: Official in charge for claim processing
          type: string
      example:
        claimId: ABC-123
        email: john.doe@foo.com
    RateChange:
      required:
        - increase
        - currency
        - amount
        - claimId
      type: object
      properties:
        increase:
          description: true if future premiums will be bigger
          type: boolean
        currency:
          type: string
        amount:
          type: string
        claimId:
          type: string
      example:
        increase: true
        currency: Euro
        amount: '15.5'
        claimId: ABC-123

API Refactoring

Yes, it makes absolute sense to do refactoring when you design your APIs. As already mentioned, both the Assessment asyncAPI and the Notification asyncAPI use the same ClaimRejected data type. Fortunately, it is possible to extract it and reference it:

asyncapi: '2.1.0'
info:
  title: Assessment Service
  version: 1.1.0
  description: This service is in charge of assessment process
channels:
  claim/accepted:
    subscribe:
      message:
        payload:
          $ref: 'https://raw.githubusercontent.com/tillias/api-first-insurance/main/claim-accepted-schema.json'
  claim/rejected:
    subscribe:
      message:
        payload:
          $ref: 'https://raw.githubusercontent.com/tillias/api-first-insurance/main/claim-rejected-schema.json'

Now the payload for both ClaimAccepted and ClaimRejected messages is referenced via an external URI. In the real world it will be your schema registry storing and managing the schemas instead of a git repo.

Let’s reuse ClaimRejected within the Notification API:

asyncapi: '2.1.0'
info:
  title: Notification API
  version: 1.0.0
  description: Notifies the policyholder if an insurance claim was rejected by the insurer
channels:
  claim/rejected:
    publish:
      message:
        payload:
          $ref: 'https://raw.githubusercontent.com/tillias/api-first-insurance/main/claim-rejected-schema.json'

Starting with version 3.1.0 (released in February 2021), the OpenAPI Specification supports 100% compatibility with the latest draft (2020-12) of JSON Schema.

This is a huge step towards async <-> sync compatibility. You can finally define your data types once and reuse them whether you are going to use a sync or an async approach.

Let’s make the Premiums API use ClaimAccepted:

openapi: 3.0.2
info:
  title: Premiums API
  version: 1.0.0
  description: |-
    Enables following capabilities:
    - Checking if future rates for particular insurance holder should be changed
    - Change future insurance premiums
paths:
  /premiums/validate:
    post:
      requestBody:
        content:
          application/json:
            schema:
              $ref: 'https://raw.githubusercontent.com/tillias/api-first-insurance/main/claim-accepted-schema.json'
...
        

You can check the refactored API by clicking https://editor.swagger.io/?url=https://raw.githubusercontent.com/tillias/api-first-insurance/main/premiums-openapi.yml

Publishing your APIs

Now we have covered both sync and async APIs and it is time to publish them.

It can be either your favorite API management solution or some open source tool like Apicurio Schema Registry (also available with commercial support as part of Red Hat Integration).

At the moment of writing this article, the registry supports adding, removing, and updating the following types of artifacts: OpenAPI, AsyncAPI, GraphQL, Apache Avro, Google Protocol Buffers, JSON Schema, Kafka Connect schema, WSDL, and XML Schema (XSD).

I have prepared a Docker Compose file which you can use to start it locally.

I have already imported all APIs into it:

Apicurio Schema Registry

Data Model evolution

Every API evolves. The data types we defined before are part of various APIs, and this brings new challenges.

What will happen if we rename some field in ClaimAccepted? We will inevitably break all consumers of the Premiums API and the Assessment asyncAPI.

One of the biggest advantages of using a schema registry is that you can activate compatibility rules for your data types:

I will enable the “Backward” compatibility rule and try renaming the “email” field:

The Schema Registry will automatically validate whether this would break compatibility with clients and won’t allow me to do it:
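
If you prefer scripting over clicking through the UI, the registry exposes the same rules via its REST API. A minimal Java sketch, assuming the v1-style endpoint POST /api/artifacts/{artifactId}/rules and an artifact id of claim-accepted-schema (both assumptions; check the REST API docs for your registry version):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EnableBackwardCompatibility {

    public static void main(String[] args) throws Exception {
        // Assumed artifact id and registry URL; adjust to your setup
        String url = "http://localhost:8081/api/artifacts/claim-accepted-schema/rules";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"type\": \"COMPATIBILITY\", \"config\": \"BACKWARD\"}"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode());
    }
}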

Mocking APIs

Now we’re done with the API design. Let’s bring it to life and enable our business to do some experimenting with the business process.

For those purposes I can highly recommend https://microcks.io

It will allow us to create mocks for our APIs with just a few clicks.

I have prepared a Docker Compose file which you can use to start it locally. Or just ask your IT department how to run it in your organisation; there are plenty of options out there.

Here is a short video showing how you can create mocks for your APIs:

Mocking APIs with Microcks

And this is the git repo we used in the video: GitHub – tillias/api-first-insurance: API-first practical approach to building insurance claims handling process

Mocking async APIs

At the moment of writing this article there is no easy low-code way to prepare the local infrastructure using Docker Compose alone.

It is of course doable, but it requires a k8s cluster and some programming skills to react to those events within the workflow engine itself.

I will cover this topic in one of my next articles. For now we will use the Polling Consumer pattern. Yes, even after 17 years the Enterprise Integration Patterns are more than relevant; thanks, Gregor Hohpe.

Here is what the modified business process looks like:

insurance claim handling process (version with polling)

The poll tasks (marked in green) will invoke a supporting API:

openapi: 3.0.2
info:
  title: Event Polling API
  version: 1.0.0
  description: Provides methods for polling the event broker for events
paths:
  /events/claims/accepted:
    get:
      responses:
        '200':
          content:
            application/json:
              schema:
                $ref: 'https://raw.githubusercontent.com/tillias/api-first-insurance/main/claim-accepted-schema.json'
              examples:
                Sample request:
                  value:
                    claimId: ABC-123
                    email: john.doe@foo.com
          description: If event is available
        '404':
          description: No events available
      operationId: claimAccepted
      summary: Polls for ClaimAccepted event
  /events/claims/rejected:
    get:
      responses:
        '200':
          content:
            application/json:
              schema:
                $ref: 'https://raw.githubusercontent.com/tillias/api-first-insurance/main/claim-rejected-schema.json'
              examples:
                Sample request:
                  value:
                    claimId: ABC-123
                    email: john.doe@foo.com
                    reason: Client provided invalid insurance policy number
          description: If event is available
        '404':
          description: No events available
      operationId: claimRejected
      summary: Polls for ClaimRejected event

Such an API can easily be implemented using the no-code Kafka Bridge or the Confluent REST Proxy.
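
If you’d rather roll your own than use a bridge, here is a minimal Spring Boot sketch of the /events/claims/rejected polling endpoint: a Kafka listener buffers events in memory and the REST endpoint drains them one at a time. This is an illustration only, with an in-memory queue standing in for proper offset management, and the topic name is an assumption:

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.springframework.http.ResponseEntity;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class EventPollingController {

    // DTO mirroring the claim-rejected-schema.json payload
    public record ClaimRejected(String claimId, String email, String reason) {}

    // In-memory buffer between the Kafka listener and the polling endpoint.
    // Events held here are lost on restart; a real implementation would
    // manage Kafka offsets explicitly instead.
    private final Queue<ClaimRejected> rejected = new ConcurrentLinkedQueue<>();

    @KafkaListener(topics = "claim.rejected", groupId = "event-polling-api")
    public void buffer(ClaimRejected event) {
        rejected.add(event);
    }

    @GetMapping("/events/claims/rejected")
    public ResponseEntity<ClaimRejected> claimRejected() {
        ClaimRejected event = rejected.poll();
        // 200 with the event if one is available, 404 otherwise, which is
        // exactly the contract from the Event Polling API document above
        return event != null ? ResponseEntity.ok(event) : ResponseEntity.notFound().build();
    }
}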

Tying it all together with Camunda

I will use Camunda to orchestrate our business process. We will need Camunda Modeler to configure our APIs.

Let’s start the Camunda BPM engine. For this article I will be using the Community Edition with Tomcat: https://camunda.com/de/download

Unpack it and run start-camunda.bat (Windows) or start-camunda.sh (Linux).

Now you can point your Camunda Modeler to the claim-process-polling.bpmn we downloaded in the previous video.

Let’s bind our tasks to the mocked APIs we created before:

Consuming mocked APIs from business process
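
For reference, the same binding can also be done in code rather than Modeler configuration. Here is a minimal JavaDelegate sketch calling the mocked Assessment API; the Microcks URL pattern below is an assumption, so copy the real mock URL from the Microcks UI:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

public class StartAssessmentDelegate implements JavaDelegate {

    // Placeholder: base URL of the Microcks mock for the Assessment API
    private static final String MOCK_BASE = "http://localhost:8585/rest/Assessment+API/1.0.0";

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        String claimId = (String) execution.getVariable("claimId");

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(MOCK_BASE + "/assess?claimId=" + claimId))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Store the raw Assessment payload for downstream tasks
        execution.setVariable("assessmentResponse", response.body());
    }
}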

Final words

In my experience, designing APIs and data structures is one of the most challenging tasks. With the proposed approach we let business departments experiment and collaborate with architects and engineers, encouraging a fail-fast culture.

And software engineering teams can use it as valuable input to implement, test, and improve all those APIs.

If you like it, follow me on twitter: https://twitter.com/tillias

Literature

https://www.infoq.com/articles/events-workflow-automation

https://blog.bernd-ruecker.com/the-7-sins-of-workflow-b3641736bf5c

https://github.com/rob2universe/camunda-http-connector-example

https://medium.com/@stephenrussett/handling-government-business-processes-across-administrative-divisions-digitalstate-406f86d4fd56

Event-Driven API with Apicurio Registry, Spring Boot, Apache Avro and Kafka

API management is important. All (well, almost all) use OpenAPI nowadays. But what about managing your async APIs? When we’re talking about microservices, the odds that you’re using Kafka are very high. Do you manage the schemas of your events / commands?

In this post I would like to show you how you can manage schemas for your event-driven microservices using Apicurio Registry, Spring Boot, Apache Avro and Kafka.


As Mike Amundsen mentioned (thanks for allowing me to cite it) in his Moving from RESTful to EVENTful:

While RESTful systems focused on resources, EVENTful solutions focus on actions. And, more importantly, EVENTful solutions rely on asynchronous interactions. This opens up many more possibilities for building responsive, real-time solutions. That means an EVENTful design offers services the ability to request and respond in real time and to do it in a way that provides more flexibility in the way the data is shared, displayed, and mixed across devices and platforms. 

Mike Amundsen

While I’m still waiting for my print copy of his awesome book Design and Build Great Web APIs: Robust, Reliable, and Resilient, I would like to describe my technical way to EVENTful with Apicurio Registry.

Disclaimer: if you’re with Confluent, no worries! It works the same way with the Confluent Schema Registry. I would like to have an independent opinion based on an open source product, though, and Apicurio Registry looks like a good match.

Update: you can use the Confluent Registry as well, and vice versa. Both tools have commercial versions with first-class support (see the GA from Red Hat or from Confluent), or you can simply use them as-is with some minor limitations:

“The only limitation of CCL is that you cannot sell the Schema Registry as a service (say AWS Kafka SR). However, you can build a commercial application and use SR under the hood (without any royalty or something else)”

Kai Waehner

If you still don’t have Docker installed, get it now. We need docker-compose to set up the infrastructure:

version: "3"
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: apicurio-registry
      POSTGRES_PASSWORD: password
  registry:
    image: apicurio/apicurio-registry-jpa
    ports:
      - 8081:8080
    environment:
      QUARKUS_DATASOURCE_URL: 'jdbc:postgresql://postgres/apicurio-registry'
      QUARKUS_DATASOURCE_USERNAME: apicurio-registry
      QUARKUS_DATASOURCE_PASSWORD: password
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'bitnami/kafka:latest'
    ports:
      - '9092:9092'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_LISTENERS=PLAINTEXT://:9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper

Save this as docker-compose.yml or download it here. Then execute:

docker-compose up

Check that Apicurio Registry is up and running by opening http://localhost:8081/api

We’re using apicurio/apicurio-registry-jpa for simplicity in our example, but you can choose among the other storage variants as well.

Now it’s time to create a sample Spring Boot project:

Or you can simply download it from GitHub: tillias/spring-boot-kafka-apicurio.

Navigate to /src/main/resources/avro/schema and create a sample Avro schema:

{
  "name": "Event",
  "namespace": "com.github.tillias.spbka.schema.avro",
  "type": "record",
  "doc": "Avro Schema for Event",
  "fields" : [ {
    "name" : "name",
    "type" : "string"
  }, {
    "name" : "description",
    "type" : "string"
  }, {
    "name" : "createdOn",
    "type" : "int",
    "logicalType": "timestamp-millis"
  }
  ]
}

With this schema in place we invoke code generation using the avro-maven-plugin. Open pom.xml and add the plugin to the build section:

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>org.apache.avro</groupId>
                <artifactId>avro-maven-plugin</artifactId>
                <version>${avro.version}</version>
                <executions>
                    <execution>
                        <phase>generate-sources</phase>
                        <goals>
                            <goal>schema</goal>
                        </goals>
                        <configuration>
                            <sourceDirectory>${project.basedir}/src/main/resources/avro/schema</sourceDirectory>
                            <includes>
                                <include>**/*.avsc</include>
                            </includes>
                            <outputDirectory>${project.build.directory}/generated-sources/avro/schema</outputDirectory>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

This will automatically compile your schemas and generate sources under target\generated-sources\avro\schema\com\github\tillias\spbka\schema\avro:
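
The generated Event class is a regular Avro specific record, so you can create instances with its generated builder. A quick sketch (a hypothetical helper of mine, assuming the schema above has been compiled):

import com.github.tillias.spbka.schema.avro.Event;

public class EventFactory {

    public static Event sampleEvent() {
        // The builder API is generated by avro-maven-plugin from the schema above;
        // createdOn is declared as int there, so we pass epoch seconds
        return Event.newBuilder()
                .setName("some name")
                .setDescription("some description")
                .setCreatedOn(1603214435)
                .build();
    }
}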

The schema is ready; now it’s time to integrate Apicurio Registry. Add the following dependencies to your pom.xml:

    <properties>
        <avro.version>1.10.0</avro.version>
        <apicurio.version>1.3.1.Final</apicurio.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.avro</groupId>
            <artifactId>avro</artifactId>
            <version>${avro.version}</version>
        </dependency>
        <dependency>
            <groupId>io.apicurio</groupId>
            <artifactId>apicurio-registry-utils-serde</artifactId>
            <version>${apicurio.version}</version>
        </dependency>
        <dependency>
            <groupId>org.jboss.resteasy</groupId>
            <artifactId>resteasy-client</artifactId>
            <version>4.5.6.Final</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>

We need resteasy-client until https://github.com/Apicurio/apicurio-registry/issues/702 is fixed. According to Eric Wittmann (it seems he is the project lead):

For now we’ll have to document the issue (users might need the RE rest client dependency as well, depending on their application). The old client interface that uses the JAX-RS classes has been deprecated – so the next major release should clean the last of this up.

Now let’s add the glue between Spring Boot, Apicurio Registry, and Kafka using some Spring Boot magic. Add the following to your application.yaml:

server:
  port: 8080
spring:
  kafka:
    consumer:
      bootstrap-servers: localhost:9092
      group-id: group_id
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: io.apicurio.registry.utils.serde.AvroKafkaDeserializer
      properties:
        spring.json.trusted.packages: "com.github.tillias.spbka"
        apicurio:
          registry:
            use-specific-avro-reader: true
            url: http://localhost:8081/api
    producer:
      bootstrap-servers: localhost:9092
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: io.apicurio.registry.utils.serde.AvroKafkaSerializer
      properties:
        apicurio:
          registry:
            url: http://localhost:8081/api
            artifact-id: io.apicurio.registry.utils.serde.strategy.TopicIdStrategy

First of all we’re saying that the Kafka broker is available on port 9092. Then we define the value-deserializer and value-serializer provided by Apicurio.

spring.json.trusted.packages for the consumer is mandatory; otherwise our generated sources for Event won’t be handled properly.

use-specific-avro-reader will allow us to use the generated sources directly in the consumer instead of GenericData.Record. More about this later in the article.

The producer’s artifact-id property can be either:

  • ArtifactIdStrategy to return an artifact ID
  • GlobalIdStrategy to return a global ID

Please refer to the documentation (section 9.4, “Strategies to lookup a schema”) for more details. I’m using io.apicurio.registry.utils.serde.strategy.TopicIdStrategy here for simplicity.

Let’s push our schema to Apicurio Registry:

curl --location --request POST 'http://localhost:8081/api/artifacts' \
--header 'Content-Type: application/json; artifactType=AVRO' \
--header 'X-Registry-ArtifactId: events-value' \
--data-raw '{
  "name": "Event",
  "namespace": "com.github.tillias.spbka.schema.avro",
  "type": "record",
  "doc": "Avro Schema for Event",
  "fields" : [ {
    "name" : "name",
    "type" : "string"
  }, {
    "name" : "description",
    "type" : "string"
  }, {
    "name" : "createdOn",
    "type" : "int",
    "logicalType": "timestamp-millis"
  }
  ]
}'

Notice the following header:

X-Registry-ArtifactId: events-value

This is exactly where TopicIdStrategy plays an important role:

The default strategy is TopicIdStrategy, which looks for Service Registry artifacts with the same name as the Kafka topic receiving messages.

I used curl for simplicity, and as Hugo Guerrero mentioned you can:

…also use the maven plugin to register your schema instead of the cURL POST call or event better, you can configure the serdes to automatically register it if it’s not found in the registry already

Hugo Guerrero

More about this in the “Strategies to return an artifact ID” section.

Verify that the schema was successfully created by opening http://localhost:8081/ui/artifacts/events-value/versions/latest

We’re finally ready to write our first producer:

public class Topic {
    public static final String NAME = "events";
}

@Service
@Slf4j
@RequiredArgsConstructor
public class Producer {

    private final KafkaTemplate<String, Event> kafkaTemplate;

    public void send(Event payload) {
        log.info("Producer sending message {} to topic {}", payload, Topic.NAME);
        this.kafkaTemplate.send(Topic.NAME, payload);
    }
}

And the consumer:

@Service
@Slf4j
@RequiredArgsConstructor
public class Consumer {

    @KafkaListener(topics = Topic.NAME, groupId = "${spring.kafka.consumer.group-id}")
    public void consume(Event message) {
        log.info("Consumer consumed message {} from topic {}", message, Topic.NAME);
    }
}

I’m using a couple of Lombok annotations here: @Slf4j and @RequiredArgsConstructor.

Finally, let’s add a resource to trigger the producer using HTTP POST:

@RestController
@RequiredArgsConstructor
@Slf4j
@RequestMapping(value = "/kafka")
public class Resource {

    private final Producer producer;

    @PostMapping(value = "/publish")
    public void publish(@RequestBody Event event) {
        log.info("REST Controller has received entity: {}", event);
        this.producer.send(event);
    }
}

We’re ready to start our spring-boot application and verify that everything is working fine:

curl --location --request POST 'http://localhost:8080/kafka/publish' \
--header 'Content-Type: application/json' \
--data-raw '{
    "name": "some name",
    "description": "some description",
    "createdOn": 1603214435
}'

You will see something like this in your console:

2020-10-20 19:20:44.354  INFO 29016 --- [ctor-http-nio-3] com.github.tillias.spbka.Producer        : Producer sending message {"name": "some name", "description": "some description", "createdOn": 1603214435} to topic events
2020-10-20 19:20:44.393  INFO 29016 --- [ntainer#0-0-C-1] com.github.tillias.spbka.Consumer        : Consumer consumed message {"name": "some name", "description": "some description", "createdOn": 1603214435} from topic events

Now you can evolve your async API by enabling validity rules, compatibility rules, or built-in versioning:

Backward, full, and forward compatibility, automatic validation… So much good stuff out of the box!

In real-world scenarios you can drive Apicurio Registry via its REST API instead of manual uploads: https://www.apicur.io/registry/docs/apicurio-registry/assets-attachments/registry-rest-api.htm

I hope you enjoyed the stuff. Feel free to ping me on LinkedIn or twitter.

You can find all code from this article available on GitHub: tillias/spring-boot-kafka-apicurio.

Visualize graph data using vis-network and Angular

A week ago I posted about my new open source project, microservice-catalog. Today I’m pleased to introduce a major new feature: visualizing dependencies between microservices.

In the modern world it is crucial to understand where your possible bottlenecks are, what the dependencies look like, and to control the build (or even deployment) order of your services across various environments.

I have decided to use vis-network as the library for graph visualization. It is a very powerful open source tool with source code available on GitHub. I was also positively surprised by how quickly the community reacted to my questions and bug reports.

The first step is to install the library itself into your Angular project:

npm install vis-network

You also need to install a few peer dependencies:

npm install @egjs/hammerjs
npm install keycharm
npm install vis-data
npm install vis-util

Now it is time to introduce a brand new Angular component:

ng generate component dependency-dashboard

Open dependency-dashboard.component.html and add the following lines:

<p>dependency-dashboard works!</p>
<div id="visNetwork" #visNetwork></div>

Your TypeScript code in its simplest form may look like this:

import { AfterViewInit, Component, ElementRef, OnInit, ViewChild } from '@angular/core';
import { DataSet } from 'vis-data';
import { Network } from 'vis-network';

@Component({
  selector: 'jhi-dependency-dashboard',
  templateUrl: './dependency-dashboard.component.html',
  styleUrls: ['./dependency-dashboard.component.scss'],
})
export class DependencyDashboardComponent implements OnInit, AfterViewInit {
  @ViewChild('visNetwork', { static: false }) visNetwork!: ElementRef;
  private networkInstance: any;

  constructor() {}

  ngOnInit(): void {}

  ngAfterViewInit(): void {
    // create an array with nodes
    const nodes = new DataSet<any>([
      { id: 1, label: 'Node 1' },
      { id: 2, label: 'Node 2' },
      { id: 3, label: 'Node 3' },
      { id: 4, label: 'Node 4' },
      { id: 5, label: 'Node 5' },
    ]);

    // create an array with edges
    const edges = new DataSet<any>([
      { from: '1', to: '3' },
      { from: '1', to: '2' },
      { from: '2', to: '4' },
      { from: '2', to: '5' },
    ]);

    const data = { nodes, edges };

    const container = this.visNetwork;
    this.networkInstance = new Network(container.nativeElement, data, {});
  }
}

If you are not familiar with ElementRef take a look here: https://www.techiediaries.com/angular-elementref

Looks good so far? Not really. If you try to run it, nothing will happen. According to the vis-network developers (see my ticket on GitHub):

This is expected behavior (it’s a documented part of the API) and a bug (it was never meant to happen)

To solve this, use the following imports in your TS code instead:

import { DataSet } from 'vis-data/peer';
import { Network } from 'vis-network/peer';

The next step is to load some real data from your REST API:

@Component({
  selector: 'jhi-dependency-dashboard',
  templateUrl: './dependency-dashboard.component.html',
  styleUrls: ['./dependency-dashboard.component.scss'],
})
export class DependencyDashboardComponent implements AfterViewInit {
  @ViewChild('visNetwork', { static: false })
  visNetwork!: ElementRef;

  networkInstance: any;
  constructor(protected dependencyService: DependencyService) {}

  ngAfterViewInit(): void {
    const container = this.visNetwork;

    const data = {};
    this.networkInstance = new Network(container.nativeElement, data, {
      height: '100%',
      width: '100%',
      nodes: {
        shape: 'hexagon',
        font: {
          color: 'white',
        },
      },
      edges: {
        smooth: false,
        arrows: {
          to: {
            enabled: true,
            type: 'vee',
          },
        },
      },
    });

    this.loadAll();
  }

  loadAll(): void {
    this.dependencyService
      .query()
      .pipe(map(httpResponse => httpResponse.body))
      .subscribe(dependencies => this.refreshGraph(dependencies || []));
  }

  refreshGraph(dependencies: IDependency[]): void {
    const edges = new DataSet<any>();

    const microservicesMap = new Map<number, IMicroservice>();

    dependencies.forEach(d => {
      if (d.source != null && d.target != null) {
        microservicesMap.set(d.source.id!, d.source);
        microservicesMap.set(d.target.id!, d.target);

        edges.add({
          from: d.source.id,
          to: d.target.id,
        });
      }
    });

    const nodes = new DataSet<any>([...microservicesMap.values()].map(m => this.convertToGraphNode(m)));

    const data = { nodes, edges };

    this.networkInstance.setData(data);
  }

  convertToGraphNode(microservice: IMicroservice): any {
    return {
      id: microservice.id,
      label: microservice.name,
      title: microservice.description,
    };
  }
}

In my use case each microservice is represented by a node and each dependency by an edge. In other words, if there is a dependency between two microservices, we show an edge between the two nodes. Since a dependency has a direction, we need to set arrows.to.enabled in the graph options.

Finally, I wanted the possibility to filter for one particular microservice and display its dependencies. No problem, let’s improve our template and introduce a few filters:

    <div class="row justify-content-start">
        <div class="col-4">
            <jhi-microservice-search class="p-3" (itemSelected)="onMicroserviceSelected($event)">
            </jhi-microservice-search>
        </div>
    </div>
    <div class="row justify-content-start">
        <div class="col-4">
            <div class="form-check" >
                <label class="form-check-label">
                    <input type="checkbox" [(ngModel)]="onlyIncomingFilter" (ngModelChange)="onFilterChange()">
                    <span jhiTranslate="microcatalogApp.dashboard.dependency.filter.incoming"></span>
                </label>
            </div>
            <div class="form-check">
                <label class="form-check-label">
                    <input type="checkbox" [(ngModel)]="onlyOutgoingFilter" (ngModelChange)="onFilterChange()">
                    <span jhiTranslate="microcatalogApp.dashboard.dependency.filter.outgoing"></span>
                </label>
            </div>
        </div>
    </div>

Here I’m reusing the existing microservice search component and catching its itemSelected event:

export class DependencyDashboardComponent implements AfterViewInit {
  @ViewChild('visNetwork', { static: false })
  visNetwork!: ElementRef;

  networkInstance: any;
  searchValue?: IMicroservice;
  onlyIncomingFilter = true;
  onlyOutgoingFilter = true;

  constructor(protected dependencyService: DependencyService) {}

  ngAfterViewInit(): void {
    const container = this.visNetwork;

    const data = {};
    this.networkInstance = new Network(container.nativeElement, data, {
...
    });

    this.loadAll();
  }

  onFilterChange(): any {
    // Only makes sense to refresh if filter for particular microservice is active
    if (this.searchValue) {
      this.loadAll();
    }
  }

  loadAll(): void {
    this.dependencyService
      .query()
      .pipe(map(httpResponse => httpResponse.body))
      .subscribe(dependencies => this.refreshGraph(dependencies || []));
  }

  refreshGraph(dependencies: IDependency[]): void {
    const edges = new DataSet<any>();

    const microservicesMap = new Map<number, IMicroservice>();

    if (this.searchValue) {
      const searchID = this.searchValue.id;
      dependencies = dependencies.filter(d => {
        if (this.onlyIncomingFilter && !this.onlyOutgoingFilter) {
          return d.target?.id === searchID;
        }

        if (this.onlyOutgoingFilter && !this.onlyIncomingFilter) {
          return d.source?.id === searchID;
        }

        if (this.onlyIncomingFilter && this.onlyOutgoingFilter) {
          return d.source?.id === searchID || d.target?.id === searchID;
        }

        return false;
      });
    }

    dependencies.forEach(d => {
      if (d.source != null && d.target != null) {
        microservicesMap.set(d.source.id!, d.source);
        microservicesMap.set(d.target.id!, d.target);

        edges.add({
          from: d.source.id,
          to: d.target.id,
        });
      }
    });

    const nodes = new DataSet<any>([...microservicesMap.values()].map(m => this.convertToGraphNode(m)));

    const data = { nodes, edges };

    this.networkInstance.setData(data);
  }

  convertToGraphNode(microservice: IMicroservice): any {
    return {
      id: microservice.id,
      label: microservice.name,
      title: microservice.description,
    };
  }

  onMicroserviceSelected(microservice?: IMicroservice): any {
    this.searchValue = microservice;
    this.loadAll();
  }
}

That was it: now you can filter microservices and display incoming / outgoing dependencies.

Thank you for your time. Next I’m going to post about multi-field filtering and advanced search, as already implemented in the microservice catalog:

You can try it live here: https://microcatalog.herokuapp.com using the “user” / “user” login / pass.

If you’re interested in contributing, drop me a line or check existing issues: https://github.com/tillias/microservice-catalog/issues

Microservice Catalog

Recently I was reading a few articles about how organizations adopt microservices. The topic I was particularly interested in was code duplication between microservices. There is of course SonarQube, but what to do if there are several teams sitting in different departments with little or no coordination between them? How do you avoid the situation where two or more teams develop the same functionality, maybe even using different tech stacks? It seems I’m not alone, at least.

Then I remembered an old video about how Netflix or Spotify solve this problem. I remember watching it at least three times quite a few years ago, and the solution was a microservice catalog.

Disclaimer: I can’t find this video anymore, so ping me please if you still have it bookmarked somewhere.

The basic idea is to have a simple catalog displaying all your microservices in one place:

You can try it live here: https://microcatalog.herokuapp.com with “user” / “user”

I’m using the free tier on Heroku, so if the app hasn’t been used for some time it will be in standby mode. Give Heroku a few minutes to start the application.

Of course there are powerful tools like AppDynamics, but what about something really simple? Are there any open source alternatives? I tried to find some, but with no luck.

So meet my new project on GitHub: https://github.com/tillias/microservice-catalog. It currently uses a Masonry layout to align the cards. You can save a name, team, description, custom image URL, and a few other fields.

There is a Docker image available, so give it a try if you’re interested. The code is MIT-licensed, so you can use it commercially for free as well. There are a few things I’m going to improve:

  1. Visualize dependencies between microservices. I have had quite a good experience with https://github.com/visjs/vis-network
  2. Some security improvements described in https://github.com/tillias/microservice-catalog/issues/1
  3. Visual tags for microservices, e.g. “released”, “under development” etc -> implemented
  4. Other features as listed in https://github.com/tillias/microservice-catalog/issues

Please drop me a line if you’re interested in contributing. It will be fun!

Modern E2E Tests with Cypress

“Success is no accident. It is hard work, perseverance, learning, studying, sacrifice and most of all, love of what you are doing or learning to do.”

Pele

Automatic end-to-end integration tests? Fully integrated into the build pipeline? Testing the front end? Something crazy like this:

And available even for open source projects?

Looks like a crazy idea? Actually, no. In this article I’d like to tell you how I moved from this:

To this:

I was recently making the last preparations for the first release of my tillias/microservice-catalog open source project when I decided to include some smoke tests. “What about a Selenium Grid somewhere in the cloud?” was my first idea. Luckily this is my personal project and I can experiment with it. So I decided to check alternatives, and quite quickly I found Cypress.

Long story short: it is an absolutely fantastic tool. You can start writing your first productive tests within minutes, without dancing with wolves (Selenium) :)

Installation is pretty simple:

npm install cypress --save-dev

Next, go to the cypress folder and create a tsconfig.json:

{
  "compilerOptions": {
    "composite": true,
    "target": "es5",
    "lib": ["es5", "dom"],
    "types": ["cypress"]
  },
  "include": [
    "**/*.ts"
  ]
}

We love TypeScript, don’t we?

Next is the test itself. I call it integration/dependencies-dashboard.ts because I’m testing whether I can access my dependency dashboard:

describe('microservice-catalog e2e', () => {
  it('check dependency dashboard selection', function () {
    cy.visit('/');
    cy.get('.alert:nth-child(1) > .alert-link').click();
    cy.get('#username').type('user');
    cy.get('#password').type('user');
    cy.get('.btn').click();
    cy.get('.form').submit();
    cy.get('#dashboard > span > span').click();

    // wait till dependency-dashboard will load it's data
    cy.get('ngx-spinner').should('not.be.visible');

    cy.get('canvas').click();
    cy.get('canvas').click(249, 320, {force: true});
    cy.get('jhi-vertex-legend a').should('have.text', 'Service 1');
    cy.get('canvas').click(209, 135, {force: true});
    cy.get('jhi-vertex-legend a').should('have.text', 'Service 4');
  })
})

You can actually use the Cypress Recorder plugin for Chrome to automate the majority of your cases. This is a huge win if you have ever automated something with Selenium!

Run your test:

cypress run

And validate that the run was successful:

You have probably noticed dependencies-dashboard.ts.mp4 on the screenshot above. Yes, you’re right: this tool can even record videos of test runs.

Now we need some environment and test data. I decided to use my Heroku deployment and sample test data provided by Liquibase. Here is my application-heroku.yml:

spring:
  # we're running on a free tier with automatic suspension after inactivity, thus having fresh data for demo is important
  liquibase:
    drop-first: true
    contexts: prod, faker

Pay attention to the faker context. It allows me to insert some fake data:

    <changeSet id="20201001051657-1-data" author="jhipster" context="faker">
        <loadData
                  file="config/liquibase/fake-data/microservice.csv"
                  separator=";"
                  tableName="microservice">
            <column name="id" type="numeric"/>
            <column name="name" type="string"/>
            <column name="description" type="clob"/>
            <column name="image_url" type="string"/>
            <column name="swagger_url" type="string"/>
            <column name="git_url" type="string"/>
            <column name="ci_url" type="string"/>
            <column name="team_id" type="numeric"/>
            <column name="status_id" type="numeric"/>
            <!-- jhipster-needle-liquibase-add-loadcolumn - JHipster (and/or extensions) can add load columns here -->
        </loadData>
    </changeSet>

Here is an example CSV:

id;name;description;image_url;swagger_url;git_url;ci_url;team_id;status_id
1;Service 1;../fake-data/blob/hipster.txt;rich Prairie olive;Handcrafted program hard drive;lime Denmark;Kenya;1;1
2;Service 2;../fake-data/blob/hipster.txt;infrastructure;Reactive;interface bifurcated Forint;teal;2;2
3;Service 3;../fake-data/blob/hipster.txt;pixel Regional Assimilated;Home Loan Account;partnerships Wisconsin;exploit well-modulated;1;3
....

I added a few more for the other entities I need: https://github.com/tillias/microservice-catalog/commit/3f6c7b174226272d9c03b8af4aef1b0c3ddea170 as well as a fixed seed for vis-network, so the dependency graph has the same layout between test runs. I made it configurable for the Heroku deployment using webpack.prod.js:

module.exports = webpackMerge(commonConfig({env: ENV}), {
...
  plugins: [
    new webpack.DefinePlugin({
      'process.env': {
        'EXPERIMENTAL_FEATURE': false,
        'GRAPH_FIXED_SEED': true
      }
    }),

And here is my service:

@Injectable({
  providedIn: 'root',
})
export class VisNetworkService {
  fixedSeed = GRAPH_FIXED_SEED;
  seed?: number;

  constructor() {
    this.seed = undefined; // use random by default
  }

  createNetwork(element: ElementRef, options?: Options): Network {
    if (this.fixedSeed) {
      this.seed = 7;
    }

    const defaultOptions = {
      height: '100%',
      width: '100%',
      nodes: {
        shape: 'hexagon',
        font: {
          color: 'white',
        },
      },
      clickToUse: false,
      edges: {
        smooth: false,
        arrows: {
          to: {
            enabled: true,
            type: 'vee',
          },
        },
      },
      interaction: {
        multiselect: true,
      },
      layout: {
        randomSeed: this.seed,
      },
    };

    return new Network(element.nativeElement, {}, options || defaultOptions);
  }
}

Now we have the test and the data. Let’s finish with the environment. I recently migrated from Travis CI to CircleCI because I’m sick of long waiting queues… Moreover, CircleCI has a “Continuous Integration for Open Source (OSS)” program:

We support the open source community. Organizations on our free plan get 400,000 credits per month for open source builds.

https://circleci.com/open-source/

Thank you guys! You rock ’nuff said.

Here is part of my .circleci/config.yaml:

  cypress:
    docker:
      - image: cypress/browsers:node12.14.1-chrome83-ff77
    steps:
      - checkout
      - restore_cache:
          keys:
            - v2-cypress-dependencies-{{ checksum "package-lock.json" }}
            # Perform a Partial Cache Restore (https://circleci.com/docs/2.0/caching/#restoring-cache)
            - v2-cypress-dependencies-
      - run:
          name: Install Node Modules
          command: 'npm install'
      - save_cache:
          paths:
            - node
            - node_modules
            - /root/.cache/Cypress
          key: v2-cypress-dependencies-{{ checksum "package-lock.json" }}
      - run:
          name: cypress run
          command: 'npm run cypress'
workflows:
  build:
    jobs:
      - build
      - cypress:
          requires:
            - build
          filters:
            branches:
              only: master

Cypress provides pre-built Docker images for CircleCI, so it was a pretty straightforward task to integrate a second cypress job into the build workflow. Here is the final result:

Alternatively, you can check the pipeline yourself: https://app.circleci.com/pipelines/github/tillias/microservice-catalog/86/workflows/1d8d699c-37d4-428a-a2f7-86728f86bd01

In the next week(s) I’m going to finish the remaining e2e tests, prepare the release, publish the final Docker image, and post about the latest features already implemented, such as impact analysis:

I hope you enjoyed the stuff and have a nice week! Stay positive and healthy during these difficult times.

Feel free to ping me on LinkedIn or Twitter if you have any suggestions.

Building shortest path for microservices with SpringBoot, Angular 10 and JGraphT

I’m pleased to announce that my tillias/microservice-catalog can now build a release path for a particular microservice. There is both a UI and a REST API available. The implementation, though, is not as straightforward as one may think. In this blog post I’d like to talk about graph algorithms and some mathematics, and of course I will show you the code.

I have chosen the name release path intentionally, because it is not a release plan! Microservices must be independently deployable. You can use it, for example, to orchestrate your automated integration tests, to get more insight into your dependencies and, of course, to create awesome PowerPoint presentations if that is what you have to do ;-)

In addition to the UI there is a REST API available. You can try it yourself by downloading docker-compose.yml and starting it. Then you can issue the following GET request:

curl --location --request GET 'http://localhost:8080/api/release-path/microservice/1' \
--header 'Authorization: Bearer eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJhZG1pbiIsImF1dGgiOiJST0xFX0FETUlOLFJPTEVfVVNFUiIsImV4cCI6MTYwNDA1Mjk0M30.C0LvtPNvfOUy7pu2eOhZLzMICyYP4tDkfpD5haqKWJGitc3PUkQI5Y1JSGG1tCCgX4p7Gd8kQqENC1y6_5gCqA'

If you’re not familiar with JWT and Bearer, please refer to this article.

The response structure is pretty simple:

A ReleasePath is built for a particular target (Microservice) and consists of groups. Each ReleaseGroup has steps which can be triggered in parallel (e.g. build, integration tests etc.). Each ReleaseStep contains a workItem and its parents.
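
Expressed as plain Java classes, the model could be sketched like this (getters and setters omitted; the field names follow the description above and the service code further below):

// Rough sketch of the response model described above
public class ReleasePath {
    private Instant createdOn;
    private Microservice target;       // the microservice the path is built for
    private List<ReleaseGroup> groups; // ordered: group 0 is released first
}

public class ReleaseGroup {
    private Set<ReleaseStep> steps; // steps within a group can run in parallel
}

public class ReleaseStep {
    private Microservice workItem;
    private List<Microservice> parentWorkItems;
}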

Now for the interesting part: how can you actually create such a structure?

“Talk is cheap. Show me the code.”

― Linus Torvalds

Let us start with the basics and introduce GraphLoaderService. We’re using JGraphT here:

import java.util.List;

import org.jgrapht.Graph;
import org.jgrapht.graph.DefaultDirectedGraph;
import org.jgrapht.graph.DefaultEdge;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
@Transactional
public class GraphLoaderService {

    private final MicroserviceRepository microserviceRepository;
    private final DependencyRepository dependencyRepository;

    public GraphLoaderService(MicroserviceRepository microserviceRepository, DependencyRepository dependencyRepository) {
        this.microserviceRepository = microserviceRepository;
        this.dependencyRepository = dependencyRepository;
    }

    public Graph<Microservice, DefaultEdge> loadGraph() {
        List<Microservice> microservices = microserviceRepository.findAll();
        List<Dependency> dependencies = dependencyRepository.findAll();

        return createGraph(microservices, dependencies);
    }

    private Graph<Microservice, DefaultEdge> createGraph(final List<Microservice> nodes, final List<Dependency> edges) {
        final Graph<Microservice, DefaultEdge> graph = new DefaultDirectedGraph<>(DefaultEdge.class);

        nodes.forEach(graph::addVertex);
        edges.forEach(d -> graph.addEdge(d.getSource(), d.getTarget()));

        return graph;
    }
}

As you may remember from one of my previous posts, we have Microservices and Dependencies. Each Dependency has a source and a target of type Microservice. This is perfect input data for a directed graph.
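
As a reminder, a minimal shape of these entities might look roughly like this (an illustrative sketch only; the real JPA entities in the repository have more attributes):

// Illustrative sketch: just enough structure for the graph examples below
public class Microservice {
    private Long id;
    private String name;
    // getters, setters and id-based equals/hashCode omitted
}

public class Dependency {
    private Microservice source; // the microservice that depends on target
    private Microservice target;
    // getters and setters omitted
}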

For simplicity I will be using microservice ids as “nodes”:

Let’s say we want to calculate the release path for the first microservice (1).

In graph theory, a component, sometimes called a connected component, of an undirected graph is a subgraph in which any two vertices are connected to each other by paths, and which is connected to no additional vertices in the supergraph

https://en.wikipedia.org/wiki/Component_(graph_theory)

So first of all we need to calculate the connected component which contains (1):

import java.util.Objects;
import java.util.Optional;
import java.util.Set;

import org.jgrapht.Graph;
import org.jgrapht.alg.connectivity.ConnectivityInspector;
import org.jgrapht.graph.AsSubgraph;
import org.jgrapht.graph.DefaultEdge;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

@Service
public class ReleasePathCustomService {

    private final Logger log = LoggerFactory.getLogger(ReleasePathCustomService.class);

    private final GraphLoaderService graphLoaderService;

    public ReleasePathCustomService(GraphLoaderService graphLoaderService) {
        this.graphLoaderService = graphLoaderService;
    }

    public Optional<ReleasePath> getReleasePath(final Long microserviceId) {
        final Graph<Microservice, DefaultEdge> graph = graphLoaderService.loadGraph();

        final Optional<Microservice> maybeTarget = graph.vertexSet()
            .stream().filter(v -> Objects.equals(v.getId(), microserviceId)).findFirst();

        // can't build a release path, because the microservice with the given id is not present in the graph
        if (!maybeTarget.isPresent()) {
            return Optional.empty();
        }

        final Microservice target = maybeTarget.get();

        final ConnectivityInspector<Microservice, DefaultEdge> inspector = new ConnectivityInspector<>(graph);
        final Set<Microservice> connectedSet = inspector.connectedSetOf(target);
        // Connected subgraph that contains the target microservice
        final AsSubgraph<Microservice, DefaultEdge> targetSubgraph = new AsSubgraph<>(graph, connectedSet);
        log.debug("Connected subgraph, that contains target microservice: {}", targetSubgraph);

        // placeholder for now; we will build the actual path step by step below
        return Optional.of(new ReleasePath());
    }
}

From the code above, targetSubgraph will look like this:

Now it is getting interesting. To get all dependencies of (1) we need to iterate through the graph. I will be using depth-first search starting from (1).

Depth-first search is an algorithm for traversing or searching tree or graph data structures. The algorithm starts at the root node and explores as far as possible along each branch before backtracking. 

https://en.wikipedia.org/wiki/Depth-first_search

We will remember all visited microservices (vertices) in a set. Since a release order only exists for an acyclic graph, we will first check the graph for cycles and throw an exception if there are any:

// Connected subgraph that contains the target microservice
final AsSubgraph<Microservice, DefaultEdge> targetSubgraph = new AsSubgraph<>(graph, connectedSet);
log.debug("Connected subgraph, that contains target microservice: {}", targetSubgraph);

final CycleDetector<Microservice, DefaultEdge> cycleDetector = new CycleDetector<>(targetSubgraph);
if (cycleDetector.detectCycles()) {
    final Set<Microservice> cycles = cycleDetector.findCycles();
    throw new IllegalArgumentException(String.format("There are cyclic dependencies between microservices : %s", cycles));
}

final Set<Microservice> pathMicroservices = new HashSet<>();
final GraphIterator<Microservice, DefaultEdge> iterator = new DepthFirstIterator<>(targetSubgraph, target);
while (iterator.hasNext()) {
    pathMicroservices.add(iterator.next());
}
final Graph<Microservice, DefaultEdge> pathGraph = new AsSubgraph<>(targetSubgraph, pathMicroservices);
log.debug("Connected subgraph, which contains all paths from the target microservice to its dependencies {}", pathGraph);

The set pathMicroservices will contain (1, 2, 4, 5, 7, 8):

You can’t get from (5) -> (10) or from (4) -> (6). That is why pathGraph will be:

Now let’s transpose our pathGraph:

In the mathematical and algorithmic study of graph theory, the converse, transpose or reverse of a directed graph G is another directed graph on the same set of vertices with all of the edges reversed compared to the orientation of the corresponding edges in G.

https://en.wikipedia.org/wiki/Transpose_graph

final Graph<Microservice, DefaultEdge> reversed = new EdgeReversedGraph<>(pathGraph);

We’re ready to construct our first ReleaseGroup. As you may notice, (5) and (8) don’t have any incoming edges in the reversed graph. We can release them first (perhaps even in parallel) and delete them from the graph:

Next we can release (7) as part of the next ReleaseGroup and delete it from the graph:

You got it right: we release (4), then (2) and finally (1). So for our example the groups are [5, 8], [7], [4], [2], [1].

Here is the code:

private ReleasePath convert(final Graph<Microservice, DefaultEdge> graph, final Microservice target) {
    final ReleasePath result = new ReleasePath();
    result.setCreatedOn(Instant.now());
    result.setTarget(target);

    final List<ReleaseGroup> groups = new ArrayList<>();

    do {
        // in the reversed graph these are the microservices nothing else depends on
        final List<Microservice> verticesWithoutIncomingEdges = graph.vertexSet().stream()
            .filter(v -> graph.incomingEdgesOf(v).isEmpty())
            .collect(Collectors.toList());
        log.debug("Leaves: {}", verticesWithoutIncomingEdges);

        final ReleaseGroup group = new ReleaseGroup();
        group.setSteps(convertSteps(verticesWithoutIncomingEdges, graph));
        groups.add(group);

        verticesWithoutIncomingEdges.forEach(graph::removeVertex);
    } while (!graph.vertexSet().isEmpty());

    result.setGroups(groups);

    return result;
}

private Set<ReleaseStep> convertSteps(final List<Microservice> verticesWithoutIncomingEdges,
                                      final Graph<Microservice, DefaultEdge> graph) {
    final Set<ReleaseStep> result = new HashSet<>();

    verticesWithoutIncomingEdges.forEach(microservice -> {
        final List<Microservice> parentWorkItems = new ArrayList<>();

        final Set<DefaultEdge> outgoingEdges = graph.outgoingEdgesOf(microservice);
        for (DefaultEdge e : outgoingEdges) {
            final Microservice edgeTarget = graph.getEdgeTarget(e);
            parentWorkItems.add(edgeTarget);
        }

        result.add(
            new ReleaseStep()
                .workItem(microservice)
                .parentWorkItems(parentWorkItems)
        );
    });

    return result;
}

We pass the reversed graph into the convert() method. Essentially this is a layered topological sort: on every iteration we take all the vertices without incoming edges, put them into one ReleaseGroup and remove them from the graph, until the graph is empty.
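
For completeness, here is how the end of getReleasePath() might look once convert() is in place (a small sketch which replaces the placeholder return from the earlier snippet):

// inside getReleasePath(), after pathGraph has been built
final Graph<Microservice, DefaultEdge> reversed = new EdgeReversedGraph<>(pathGraph);
return Optional.of(convert(reversed, target));

As a result we will get the following ReleasePath: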

Let’s add some unit tests:

@SpringBootTest(classes = {ReleasePathCustomService.class})
class ReleasePathCustomServiceTest {

    @MockBean
    private GraphLoaderService graphLoaderService;

    @Autowired
    private ReleasePathCustomService service;

    @Test
    void getReleasePath_NodeOutsideGraph_EmptyPath() {
        given(graphLoaderService.loadGraph())
            .willReturn(
                GraphUtils.createGraph(String.join("\n",
                    "strict digraph G { ",
                    "1; 2; 3;",
                    "1 -> 2;",
                    "2 -> 3;}"
                    )
                )
            );

        Optional<ReleasePath> maybePath = service.getReleasePath(4L);
        assertThat(maybePath).isEmpty();
    }

    @Test
    void getReleasePath_NoCycles_Success() {
        given(graphLoaderService.loadGraph())
            .willReturn(
                GraphUtils.createGraph(String.join("\n",
                    "strict digraph G { ",
                    "1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12;",
                    "1 -> 2;",
                    "2 -> 4;",
                    "6 -> 4;",
                    "4 -> 5;",
                    "4 -> 7;",
                    "4 -> 8;",
                    "3 -> 9;",
                    "3 -> 11;",
                    "12 -> 1;",
                    "7 -> 5;",
                    "10 -> 1;}"
                    )
                )
            );

        Optional<ReleasePath> maybePath = service.getReleasePath(1L);
        assertThat(maybePath).isPresent();

        ReleasePath path = maybePath.get();

        ReleaseStep step = getStep(path, 0, 5L);
        assertThat(step).isNotNull();
        assertThat(step.getParentWorkItems()).extracting(Microservice::getId).containsExactlyInAnyOrder(4L, 7L);

        step = getStep(path, 0, 8L);
        assertThat(step).isNotNull();
        assertThat(step.getParentWorkItems()).extracting(Microservice::getId).containsExactlyInAnyOrder(4L);

        step = getStep(path, 1, 7L);
        assertThat(step).isNotNull();
        assertThat(step.getParentWorkItems()).extracting(Microservice::getId).containsExactlyInAnyOrder(4L);

        step = getStep(path, 2, 4L);
        assertThat(step).isNotNull();
        assertThat(step.getParentWorkItems()).extracting(Microservice::getId).containsExactlyInAnyOrder(2L);

        step = getStep(path, 3, 2L);
        assertThat(step).isNotNull();
        assertThat(step.getParentWorkItems()).extracting(Microservice::getId).containsExactlyInAnyOrder(1L);

        step = getStep(path, 4, 1L);
        assertThat(step).isNotNull();
        assertThat(step.getParentWorkItems()).isEmpty();
    }

    @Test
    void getReleasePath_ContainsCyclesInSameComponent_ExceptionIsThrown() {
        given(graphLoaderService.loadGraph())
            .willReturn(
                GraphUtils.createGraph(String.join("\n",
                    "strict digraph G { ",
                    "1; 2; 3; 5; 6; 7;",
                    "1 -> 2;",
                    "2 -> 3;",
                    "3 -> 1;",
                    "5 -> 6;",
                    "5 -> 7;}"
                    )
                )
            );

        assertThatIllegalArgumentException()
            .isThrownBy(() ->
                service.getReleasePath(1L)
            )
            .withMessageStartingWith("There are cyclic dependencies between microservices");
    }

    @Test
    void getReleasePath_ContainsCyclesInOtherComponent_Success() {
        given(graphLoaderService.loadGraph())
            .willReturn(
                GraphUtils.createGraph(String.join("\n",
                    "strict digraph G { ",
                    "1; 2; 3; 4; 5; 6; 7; 8;",
                    "1 -> 2;",
                    "2 -> 3;",
                    "2 -> 4;",
                    "5 -> 6;",
                    "6 -> 7;",
                    "7 -> 8;",
                    "8 -> 6;}"
                    )
                )
            );

        Optional<ReleasePath> maybePath = service.getReleasePath(1L);
        assertThat(maybePath).isPresent();

        ReleasePath path = maybePath.get();

        ReleaseStep step = getStep(path, 0, 3);
        assertThat(step).isNotNull();
        assertThat(step.getParentWorkItems()).extracting(Microservice::getId).containsExactlyInAnyOrder(2L);

        step = getStep(path, 0, 4);
        assertThat(step).isNotNull();
        assertThat(step.getParentWorkItems()).extracting(Microservice::getId).containsExactlyInAnyOrder(2L);

        step = getStep(path, 1, 2);
        assertThat(step).isNotNull();
        assertThat(step.getParentWorkItems()).extracting(Microservice::getId).containsExactlyInAnyOrder(1L);

        step = getStep(path, 2, 1L);
        assertThat(step).isNotNull();
        assertThat(step.getParentWorkItems()).isEmpty();
    }

    private ReleaseStep getStep(final ReleasePath path, int groupIndex, long microserviceId) {
        final Set<ReleaseStep> steps = getSteps(path, groupIndex);
        if (steps != null) {
            Optional<ReleaseStep> maybeStep = steps.stream()
                .filter(s -> s.getWorkItem() != null && microserviceId == s.getWorkItem().getId()).findFirst();
            if (maybeStep.isPresent()) {
                return maybeStep.get();
            }
        }

        return null;
    }

    private Set<ReleaseStep> getSteps(ReleasePath path, int groupIndex) {
        if (path.getGroups() == null) {
            return null;
        }

        if (groupIndex >= path.getGroups().size()) {
            return null;
        }

        ReleaseGroup group = path.getGroups().get(groupIndex);
        if (group != null) {
            return group.getSteps();
        } else {
            return Collections.emptySet();
        }
    }
}
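
By the way, the tests above rely on a small GraphUtils helper that builds a Graph<Microservice, DefaultEdge> from a DOT string. The real helper lives in the repository; a minimal sketch, assuming JGraphT 1.5.x and its DOTImporter, could look like this:

import java.io.StringReader;

import org.jgrapht.Graph;
import org.jgrapht.graph.DefaultDirectedGraph;
import org.jgrapht.graph.DefaultEdge;
import org.jgrapht.nio.dot.DOTImporter;

public final class GraphUtils {

    private GraphUtils() {
    }

    public static Graph<Microservice, DefaultEdge> createGraph(final String dot) {
        final Graph<Microservice, DefaultEdge> graph = new DefaultDirectedGraph<>(DefaultEdge.class);

        final DOTImporter<Microservice, DefaultEdge> importer = new DOTImporter<>();
        // every vertex id in the DOT snippet becomes a Microservice with that id
        importer.setVertexFactory(id -> {
            final Microservice m = new Microservice();
            m.setId(Long.valueOf(id));
            return m;
        });
        importer.importGraph(graph, new StringReader(dot));

        return graph;
    }
}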

And we’re done with the back-end:

For the front-end I will start with a separate dashboard component (release-path-dashboard.component.html):

<div class="row">
    <div class="col-4">
        <jhi-release-path></jhi-release-path>
    </div>
    <div class="col-8">
        <jhi-release-graph></jhi-release-graph>
    </div>
</div>

Here is jhi-release-path; the template first and then the component code:

<div class="p3 vis-network-full-height" #visNetwork></div>

// import paths assume the JHipster layout used elsewhere in this post
import { AfterViewInit, Component, ElementRef, OnInit, ViewChild } from '@angular/core';
import { ActivatedRoute } from '@angular/router';
import { VisNetworkService } from '../../../shared/vis/vis-network.service';
import { NodeColorsService } from '../node-colors.service';
import { IReleasePath } from '../../../shared/model/release-path.model';
import { IReleaseGroup } from '../../../shared/model/release-group.model';
import { DataSet } from 'vis-data/peer';

@Component({
  selector: 'jhi-release-path',
  templateUrl: './release-path.component.html',
  styleUrls: ['./release-path.component.scss'],
})
export class ReleasePathComponent implements OnInit, AfterViewInit {
  @ViewChild('visNetwork', { static: false })
  visNetwork!: ElementRef;
  networkInstance: any;

  releasePath?: IReleasePath;

  constructor(
    protected activatedRoute: ActivatedRoute,
    protected visNetworkService: VisNetworkService,
    protected nodeColorsService: NodeColorsService
  ) {}

  ngOnInit(): void {
    this.activatedRoute.data.subscribe(({ releasePath }) => {
      this.releasePath = releasePath;
    });
  }

  ngAfterViewInit(): void {
    const container = this.visNetwork;

    const options = {
      height: '100%',
      width: '100%',
      nodes: {
        shape: 'box',
        font: {
          color: 'white',
        },
      },
      clickToUse: false,
      edges: {
        smooth: false,
        arrows: {
          to: {
            enabled: true,
            type: 'vee',
          },
        },
      },
      interaction: {
        multiselect: true,
      },
      layout: {
        hierarchical: {},
      },
    };

    this.networkInstance = this.visNetworkService.createNetwork(container, options);

    const nodes = new DataSet<any>();
    const edges = new DataSet<any>();
    const targetId = this.releasePath?.target?.id;

    if (this.releasePath && this.releasePath.groups) {
      let groupIndex = 0;
      const groups = this.releasePath.groups;
      const groupsCount = groups.length;
      this.releasePath.groups?.forEach(g => {
        let nodeColor = this.nodeColorsService.getColor(groupIndex);
        if (this.containsTargetMicroservice(g, targetId)) {
          nodeColor = this.nodeColorsService.getActiveColor();
        }

        nodes.add({
          id: groupIndex,
          label: this.generateLabel(g, groupIndex),
          color: nodeColor,
        });

        if (groupIndex < groupsCount - 1) {
          edges.add({
            from: groupIndex,
            to: groupIndex + 1,
          });
        }

        ++groupIndex;
      });
    }

    this.networkInstance.setData({ nodes, edges });
  }

  /**
   * Checks if given IReleaseGroup contains target microservice, for which release path is built
   * @param group with workItems
   * @param targetId of target microservice, for which release path is built
   */
  private containsTargetMicroservice(group: IReleaseGroup, targetId?: number): boolean {
    return group.steps?.some(s => s.workItem?.id === targetId) ?? false;
  }

  private generateLabel(group: IReleaseGroup, groupIndex: number): string {
    let label = groupIndex + 1 + '. ';

    group.steps?.forEach(s => {
      label = label + '[' + s.workItem?.name + '], ';
    });

    label = label.substr(0, label.length - 2);

    return label;
  }
}

Here is jhi-release-graph:

<div class="p3 vis-network-full-height" #visNetwork></div>

import { AfterViewInit, Component, ElementRef, OnInit, ViewChild } from '@angular/core';
import { ActivatedRoute } from '@angular/router';
import { VisNetworkService } from '../../../shared/vis/vis-network.service';
import { NodeColorsService } from '../node-colors.service';
import { IReleasePath } from '../../../shared/model/release-path.model';
import { DataSet } from 'vis-data/peer';

@Component({
  selector: 'jhi-release-graph',
  templateUrl: './release-graph.component.html',
  styleUrls: ['./release-graph.component.scss'],
})
export class ReleaseGraphComponent implements OnInit, AfterViewInit {
  @ViewChild('visNetwork', { static: false })
  visNetwork!: ElementRef;
  networkInstance: any;

  releasePath?: IReleasePath;

  constructor(
    protected activatedRoute: ActivatedRoute,
    protected visNetworkService: VisNetworkService,
    protected nodeColorsService: NodeColorsService
  ) {}

  ngOnInit(): void {
    this.activatedRoute.data.subscribe(({ releasePath }) => {
      this.releasePath = releasePath;
    });
  }

  ngAfterViewInit(): void {
    const container = this.visNetwork;

    this.networkInstance = this.visNetworkService.createNetwork(container);

    const nodes = new DataSet<any>();
    const edges = new DataSet<any>();
    const targetId = this.releasePath?.target?.id;

    if (this.releasePath) {
      let groupIndex = 0;
      this.releasePath.groups?.forEach(g => {
        g.steps?.forEach(s => {
          const workItemId = s.workItem?.id;

          let nodeColor = this.nodeColorsService.getColor(groupIndex);
          if (workItemId === targetId) {
            nodeColor = this.nodeColorsService.getActiveColor();
          }

          nodes.add({
            id: workItemId,
            label: s.workItem?.name,
            color: nodeColor,
          });

          s.parentWorkItems?.forEach(pw => {
            edges.add({
              from: workItemId,
              to: pw.id,
            });
          });
        });

        ++groupIndex;
      });
    }

    this.networkInstance.setData({ nodes, edges });
  }
}

That was it. Thank you for your time and I hope you enjoyed it!

Of course there are many more features ready in tillias/microservice-catalog. You can now create microservices and dependencies visually from the dashboard, you can delete them, and there are numerous graph validations implemented (a rough sketch follows the list below), such as:

  • Duplicated dependencies are not allowed
  • Edges which would introduce circular dependencies are not allowed
  • Edges having the same source and target are not allowed
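
A rough illustration of how such validations can be implemented with JGraphT; the method name and messages here are made up for this sketch and are not the actual catalog code:

import org.jgrapht.Graph;
import org.jgrapht.graph.DefaultEdge;
import org.jgrapht.traverse.DepthFirstIterator;

public void validateNewDependency(final Graph<Microservice, DefaultEdge> graph,
                                  final Microservice source, final Microservice target) {
    if (source.equals(target)) {
        throw new IllegalArgumentException("An edge with the same source and target is not allowed");
    }
    if (graph.containsEdge(source, target)) {
        throw new IllegalArgumentException("Duplicated dependency");
    }
    // adding source -> target closes a cycle iff source is already reachable from target
    final DepthFirstIterator<Microservice, DefaultEdge> it = new DepthFirstIterator<>(graph, target);
    while (it.hasNext()) {
        if (it.next().equals(source)) {
            throw new IllegalArgumentException("This edge would introduce a circular dependency");
        }
    }
}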

For the complete list of implemented features take a look at Version 1.0.

I’m almost feature-ready for the first 1.0.0 GA and am planning to release it in November 2020. Then I would like to concentrate on enterprise-grade features, like Domain-Driven Design (modelling aggregates, bounded contexts etc.) and/or integration with Jenkins Pipelines.

Feel free to ping me on LinkedIn or Twitter. There is now an official Gitter chat dedicated to the microservice catalog, so don’t hesitate to ask questions or make proposals.

You can find all the code from this article on GitHub as well: tillias/microservice-catalog. You can try it either using Docker images or in the cloud.

RestTemplate and Proxy Authentication explained

Some time ago I was faced with a requirement to retrieve data over REST from behind a proxy. There were some code samples available, but I was really worried about thread safety and leaking resources. In this post I would like to dig into this topic using RestTemplate from the Spring Framework.

Read the rest of this entry »

PrimeFaces DataTable and server side pagination, filtering and sorting

Recently I tried the PrimeFaces 4.x DataTable once again (after a year or so of working with Apache Wicket). One thing is certain: I like it. It is now possible to implement scalable server-side pagination, filtering and sorting logic without too many hacks.

Read the rest of this entry »

Hibernate Tools and Custom DelegatingReverseEngineeringStrategy

In this post you will learn how to install Hibernate Tools (part of the JBoss Tools project) from scratch and generate annotated POJOs for your database schema.
You will also learn how to create a custom DelegatingReverseEngineeringStrategy implementation in order to tune your entity generation process.

Read the rest of this entry »

Setting up Eclipse Kepler for JavaEE development

The fresh version of Eclipse, codenamed Kepler, ships without any plugins / add-ons installed. In this post we will learn how to prepare Kepler for Java EE development using Tomcat, DTP, Maven and Subversive.

Read the rest of this entry »