<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Anthony Honstain - Dev Notes]]></title><description><![CDATA[Technical blog and walkthroughs.]]></description><link>https://honstain.com/</link><image><url>https://honstain.com/favicon.png</url><title>Anthony Honstain - Dev Notes</title><link>https://honstain.com/</link></image><generator>Ghost 5.79</generator><lastBuildDate>Sat, 04 Apr 2026 08:02:57 GMT</lastBuildDate><atom:link href="https://honstain.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Django Service with Gunicorn]]></title><description><![CDATA[Get a Django 5 service running in IntelliJ with mypy, black, and poetry. ]]></description><link>https://honstain.com/django-service-with-gunicorn/</link><guid isPermaLink="false">65ddf98ae4a4d7087b5e7ebb</guid><category><![CDATA[PostgreSQL]]></category><category><![CDATA[Django]]></category><category><![CDATA[Python]]></category><category><![CDATA[Gunicorn]]></category><category><![CDATA[IntelliJ]]></category><dc:creator><![CDATA[Anthony Honstain]]></dc:creator><pubDate>Sat, 16 Mar 2024 15:18:50 GMT</pubDate><media:content url="https://honstain.com/content/images/2024/03/Screenshot-2024-03-15-071034.png" medium="image"/><content:encoded><![CDATA[<img src="https://honstain.com/content/images/2024/03/Screenshot-2024-03-15-071034.png" alt="Django Service with Gunicorn"><p>This will be a quick walkthrough of standing up basic Django with some fancy development tools like IntelliJ, mypy, and black. In addition to Django we will use DRF to implement a REST API and Gunicorn to expand on the basic Django runserver. 
</p><p>This guide acts as a precursor, setting the stage for future posts where this Django service will be used as a dependency of the SQS consumer we created in <a href="https://honstain.com/asyncio-sqs-and-httpx/">https://honstain.com/asyncio-sqs-and-httpx/</a>.</p><p><strong>Why would you be interested in this write-up?</strong></p><ul><li>You&apos;re looking for some practical examples of setting up Django with IntelliJ.</li><li>You&apos;re interested in Gunicorn configurations for Django services.<ul><li>This will give us some basic tooling to consider performance in future blog posts.</li></ul></li></ul><p><strong>What this post is not:</strong></p><ul><li>A replacement for the excellent Django tutorial <a href="https://docs.djangoproject.com/en/5.0/intro/?ref=honstain.com">https://docs.djangoproject.com/en/5.0/intro/</a>. We assume you have gone through the tutorial or can wing it.</li><li>Security or CI/CD - we are primarily focused on getting a service set up for experimenting with development tools and eventually scaling/performance issues (subsequent posts).</li></ul><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">Source code for the Django service discussed in this post:&#xA0;<a href="https://github.com/AnthonyHonstain/django-user-service?ref=honstain.com">https://github.com/AnthonyHonstain/django-user-service</a></div></div><h2 id="overview-of-key-dependencies">Overview of Key Dependencies</h2><p>The dependencies most relevant to this blog:</p><table>
<thead>
<tr>
<th>Dependency</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>gunicorn 21.2.0</td>
<td>The WSGI HTTP server that subsequent posts will focus on the most. <a href="https://gunicorn.org/?ref=honstain.com">https://gunicorn.org/</a></td>
</tr>
<tr>
<td>djangorestframework 3.14.0</td>
<td>The only real logic we create for this service will be a DRF endpoint and a Django model. This version doesn&apos;t officially support Django 5, but I wanted to try it anyway. <a href="https://www.django-rest-framework.org/?ref=honstain.com#">https://www.django-rest-framework.org/#</a></td>
</tr>
<tr>
<td>django 5.0.2</td>
<td>Giving the new version 5 a test drive. <a href="https://www.djangoproject.com/?ref=honstain.com">https://www.djangoproject.com/</a></td>
</tr>
<tr>
<td>psycopg2-binary 2.9.9</td>
<td>Needed for Postgres <a href="https://github.com/psycopg/psycopg2?ref=honstain.com">https://github.com/psycopg/psycopg2</a></td>
</tr>
<tr>
<td>postgres:16.2</td>
<td>The PostgreSQL docker image <a href="https://hub.docker.com/_/postgres?ref=honstain.com">https://hub.docker.com/_/postgres</a></td>
</tr>
<tr>
<td>black 24.2.0</td>
<td>I have been getting a ton of value here, ymmv. I love that it consistently and with no effort on my part produces very readable and organized formatting. <a href="https://black.readthedocs.io/en/stable/?ref=honstain.com">https://black.readthedocs.io/en/stable/</a></td>
</tr>
<tr>
<td>mypy 1.8.0</td>
<td>Static type checker - I still bump into odd things, but have found the juice worth the squeeze. <a href="https://mypy.readthedocs.io/en/stable/?ref=honstain.com">https://mypy.readthedocs.io/en/stable/</a></td>
</tr>
<tr>
<td>poetry</td>
<td>Used in place of pip for dependency management; this feels more natural for production systems.</td>
</tr>
</tbody>
</table>
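<p>Collected in Poetry&apos;s <code>pyproject.toml</code>, those pins look roughly like the following. This is only a sketch assembled from the table above (whether black and mypy live in a dev group is an assumption); the repo&apos;s <code>pyproject.toml</code> is the authoritative version.</p><pre><code class="language-toml"># Sketch only - see the repo's pyproject.toml for the real file.
[tool.poetry.dependencies]
python = "^3.12"
django = "5.0.2"
djangorestframework = "3.14.0"
gunicorn = "21.2.0"
psycopg2-binary = "2.9.9"

# Dev-only tooling (grouping is an assumption)
black = "24.2.0"
mypy = "1.8.0"
</code></pre>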
<h1 id="setup-the-django-service">Setup The Django Service</h1><p>The idea here is to take the source code and get it running locally.</p><h2 id="get-the-source-code">Get the Source Code</h2><p>I have a public github repo you can pull or copy to get started.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/AnthonyHonstain/django-user-service?ref=honstain.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - AnthonyHonstain/django-user-service: An example Django 5.0 service with DRF, postgres, gunicorn.</div><div class="kg-bookmark-description">An example Django 5.0 service with DRF, postgres, gunicorn. - AnthonyHonstain/django-user-service</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/assets/pinned-octocat-093da3e6fa40.svg" alt="Django Service with Gunicorn"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">AnthonyHonstain</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/304d3737c475e70ab218dc0b17d559156e075deffc62be48b058de916231d6c9/AnthonyHonstain/django-user-service" alt="Django Service with Gunicorn"></div></a></figure><div class="kg-card kg-callout-card kg-callout-card-red"><div class="kg-callout-emoji">&#x26A0;&#xFE0F;</div><div class="kg-callout-text">This post does not contain enough information to take you through building a Django service from scratch. 
Please consider the Django tutorial if you&apos;re looking for that sort of experience.</div></div><h2 id="get-mamba-or-decide-on-an-alternative">Get Mamba or Decide on an Alternative</h2><p>Similar to the previous <a href="https://honstain.com/asyncio-sqs-and-httpx/" rel="noreferrer">SQS consumer post</a>, we are going to assume you are using mamba.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html?ref=honstain.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Micromamba User Guide &#x2014; documentation</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://mamba.readthedocs.io/favicon.ico" alt="Django Service with Gunicorn"><span class="kg-bookmark-author"></span><span class="kg-bookmark-publisher">QuantStack &amp; mamba contributors</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://mamba.readthedocs.io/en/latest/_static/logo.png" alt="Django Service with Gunicorn"></div></a></figure><pre><code class="language-bash"># CD into the project directory
mamba create -n django-user-service -c conda-forge  python=3.12
mamba activate django-user-service
</code></pre><h2 id="pyprojecttoml-and-poetry">pyproject.toml and Poetry</h2><p>We will be using poetry to manage the python dependencies for this service.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://python-poetry.org/?ref=honstain.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Poetry - Python dependency management and packaging made easy</div><div class="kg-bookmark-description">Python dependency management and packaging made easy</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://python-poetry.org/images/favicon-origami-32.png" alt="Django Service with Gunicorn"><span class="kg-bookmark-author">Python dependency management and packaging made easy</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://python-poetry.org/images/logo-origami.svg" alt="Django Service with Gunicorn"></div></a></figure><p>This is the <code>pyproject.toml</code> for the repo <a href="https://github.com/AnthonyHonstain/django-user-service/blob/main/pyproject.toml?ref=honstain.com">https://github.com/AnthonyHonstain/django-user-service/blob/main/pyproject.toml</a></p><p>In the mamba environment you already have active:</p><pre><code>pip install poetry
poetry install --no-root</code></pre><h2 id="django-and-mypy">Django and mypy</h2><p>I set this Django service up using mypy, and there was one thing I missed that you might also want to be aware of: it&apos;s not enough to just add the mypy and django-stubs dependencies.</p><p>The mypy configuration file: <a href="https://github.com/AnthonyHonstain/django-user-service/blob/main/mypy.ini?ref=honstain.com">https://github.com/AnthonyHonstain/django-user-service/blob/main/mypy.ini</a></p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/03/image-10.png" class="kg-image" alt="Django Service with Gunicorn" loading="lazy" width="511" height="217"></figure><p>Without the mypy configuration I ran into a number of mypy errors.</p><p><strong>EXAMPLE ERRORS</strong></p><pre><code>&#x276F; mypy .
usercore/models.py:5: error: Need type annotation for &quot;name&quot;  [var-annotated]
usercore/models.py:6: error: Need type annotation for &quot;age&quot;  [var-annotated]
user_service/settings.py:30: error: Need type annotation for &quot;ALLOWED_HOSTS&quot; (hint: &quot;ALLOWED_HOSTS: List[&lt;type&gt;] = ...&quot;)  [var-annotated]
Found 3 errors in 2 files (checked 15 source files)</code></pre><p>We have the <code>django-stubs</code> dependency set, but we need to help mypy along with that mypy.ini file.</p><p>References:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://mypy.readthedocs.io/en/stable/config_file.html?ref=honstain.com#config-file"><div class="kg-bookmark-content"><div class="kg-bookmark-title">The mypy configuration file - mypy 1.9.0 documentation</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://mypy.readthedocs.io/favicon.ico" alt="Django Service with Gunicorn"><span class="kg-bookmark-author">mypy 1.9.0 documentation</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://mypy.readthedocs.io/en/stable/_static/mypy_light.svg" alt="Django Service with Gunicorn"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.ralphminderhoud.com/blog/django-mypy-check-runs/?ref=honstain.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Integrating mypy into a Django project | Ralph Minderhoud</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><span class="kg-bookmark-author">Ralph Minderhoud</span></div></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/typeddjango/django-stubs?ref=honstain.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - typeddjango/django-stubs: PEP-484 stubs for Django</div><div class="kg-bookmark-description">PEP-484 stubs for Django. 
Contribute to typeddjango/django-stubs development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/assets/pinned-octocat-093da3e6fa40.svg" alt="Django Service with Gunicorn"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">typeddjango</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/287f237780a1ff6a95af94a5287adf3d77e3405874f4b2d2b86c054183ff8e1d/typeddjango/django-stubs" alt="Django Service with Gunicorn"></div></a></figure><p></p><h2 id="intellij-and-django">IntelliJ and Django</h2><p>I will provide some examples of configuring IntelliJ to work with Django, I didn&apos;t find it super intuitive and hope this could help others.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.jetbrains.com/idea/?ref=honstain.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">IntelliJ IDEA &#x2013; the Leading Java and Kotlin IDE</div><div class="kg-bookmark-description">IntelliJ IDEA is undoubtedly the top-choice IDE for software developers. 
It makes Java and Kotlin development a more productive and enjoyable experience.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.jetbrains.com/apple-touch-icon.png?r=1234" alt="Django Service with Gunicorn"><span class="kg-bookmark-author">JetBrains</span><span class="kg-bookmark-publisher">JetBrains</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://resources.jetbrains.com/storage/products/intellij-idea/img/meta/preview.png" alt="Django Service with Gunicorn"></div></a></figure><p>You can skip this if you use an alternative editor.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-22.png" class="kg-image" alt="Django Service with Gunicorn" loading="lazy" width="1013" height="410" srcset="https://honstain.com/content/images/size/w600/2024/02/image-22.png 600w, https://honstain.com/content/images/size/w1000/2024/02/image-22.png 1000w, https://honstain.com/content/images/2024/02/image-22.png 1013w" sizes="(min-width: 720px) 720px"></figure><h3 id="django-and-python-black">Django and Python Black</h3><p>I also found Python Black to play nicely with Django and have included an example screenshot of enabling the Black formatter to run automatically in IntelliJ. 
This automates the majority of the formatting work.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-20.png" class="kg-image" alt="Django Service with Gunicorn" loading="lazy" width="1003" height="710" srcset="https://honstain.com/content/images/size/w600/2024/02/image-20.png 600w, https://honstain.com/content/images/size/w1000/2024/02/image-20.png 1000w, https://honstain.com/content/images/2024/02/image-20.png 1003w" sizes="(min-width: 720px) 720px"></figure><p></p><h1 id="getting-django-running">Getting Django Running</h1><h2 id="standing-up-the-database">Standing up the database</h2><p>The core of this service is going to be a PostgreSQL database, which we will stand up using Docker Compose.</p><p><code>docker-compose.yml</code></p>
<pre><code class="language-yaml">version: &apos;3.8&apos;

services:
  db:
    image: postgres:16.2
    volumes:
      - postgres_data_product:/var/lib/postgresql/data/
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - &apos;5432:5432&apos;
    # Logging for every query - can comment out the entire line to disable
    #     Reference: https://stackoverflow.com/a/58806511
    command: [&quot;postgres&quot;, &quot;-c&quot;, &quot;log_statement=all&quot;]

volumes:
  postgres_data_product:
</code></pre>
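<p>Once the container is up (<code>docker compose up</code>), Django needs connection settings that line up with these compose values. Below is a minimal sketch of a <code>DATABASES</code> block; the <code>NAME</code> value here is an assumption, so check <code>settings.py</code> in the repo for the real configuration.</p><pre><code class="language-python"># Sketch of a Django DATABASES block matching the docker-compose.yml above.
# NOTE: NAME is an assumption -- see settings.py in the repo for the actual value.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",  # backed by psycopg2-binary
        "NAME": "postgres",
        "USER": "postgres",      # POSTGRES_USER from docker-compose.yml
        "PASSWORD": "postgres",  # POSTGRES_PASSWORD from docker-compose.yml
        "HOST": "127.0.0.1",     # compose publishes 5432:5432 on localhost
        "PORT": "5432",
    }
}
</code></pre>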
<p>Success looks like:</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-14.png" class="kg-image" alt="Django Service with Gunicorn" loading="lazy" width="901" height="220" srcset="https://honstain.com/content/images/size/w600/2024/02/image-14.png 600w, https://honstain.com/content/images/2024/02/image-14.png 901w" sizes="(min-width: 720px) 720px"></figure><pre><code>db-1 | 2024-02-27 15:02:00.353 UTC [1] LOG: database system is ready to accept connections</code></pre><p>Connect IntelliJ to the Database</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-15.png" class="kg-image" alt="Django Service with Gunicorn" loading="lazy" width="796" height="913" srcset="https://honstain.com/content/images/size/w600/2024/02/image-15.png 600w, https://honstain.com/content/images/2024/02/image-15.png 796w" sizes="(min-width: 720px) 720px"></figure><p>It should only require you to set the user and password from the docker-compose file.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-16.png" class="kg-image" alt="Django Service with Gunicorn" loading="lazy" width="809" height="540" srcset="https://honstain.com/content/images/size/w600/2024/02/image-16.png 600w, https://honstain.com/content/images/2024/02/image-16.png 809w" sizes="(min-width: 720px) 720px"></figure><p>You won&apos;t have much to look at until you run the migrations.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-17.png" class="kg-image" alt="Django Service with Gunicorn" loading="lazy" width="361" height="212"></figure><h3 id="run-the-django-migrations">Run the Django Migrations</h3><p>You can start running the <code>manage.py</code> commands to initialize your Django service.</p><pre><code>python manage.py migrate</code></pre><figure class="kg-card kg-image-card"><img 
src="https://honstain.com/content/images/2024/02/image-18.png" class="kg-image" alt="Django Service with Gunicorn" loading="lazy" width="627" height="384" srcset="https://honstain.com/content/images/size/w600/2024/02/image-18.png 600w, https://honstain.com/content/images/2024/02/image-18.png 627w"></figure><p>You can refresh the postgres schema in IntelliJ to view the results of the migration. Success looks like you now have a table called <code>usercore_user</code>.  </p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-19.png" class="kg-image" alt="Django Service with Gunicorn" loading="lazy" width="466" height="535"></figure><p>This is also a reasonable time to create a superuser for local development.</p><pre><code>python manage.py createsuperuser</code></pre><h1 id="overview-of-the-api">Overview of the API</h1><p>This service uses DRF <a href="https://www.django-rest-framework.org/?ref=honstain.com#installation">https://www.django-rest-framework.org/#installation</a> to serve as a basic REST API.</p><p>The model here is just a laughably basic user with three fields.</p><pre><code class="language-python">from django.db import models


class User(models.Model):
    name = models.CharField(max_length=200)
    age = models.IntegerField()</code></pre><p>And we will make a basic DRF ModelViewSet for it.</p><pre><code class="language-python">import structlog

from rest_framework import serializers, viewsets

from .models import User

logger = structlog.get_logger(__name__)


# Serializers define the API representation.
class UserSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = User
        fields = [&quot;id&quot;, &quot;name&quot;, &quot;age&quot;]


# ViewSets define the view behavior.
class UserViewSet(viewsets.ModelViewSet):
    queryset = User.objects.all()
    serializer_class = UserSerializer

    def list(self, request, *args, **kwargs):
        logger.info(&quot;UserViewSet list called&quot;, size=str(len(self.queryset)))
        return super(UserViewSet, self).list(request, *args, **kwargs)

    def create(self, request, *args, **kwargs):
        logger.info(&quot;Creating a new user&quot;, body=request.data)
        return super(UserViewSet, self).create(request, *args, **kwargs)
</code></pre><p>Running the service with <code>python manage.py runserver</code> (if you haven&apos;t already) lets us exercise the endpoint using the DRF UI.</p><p><a href="http://localhost:8000/usercore/users/?ref=honstain.com">http://localhost:8000/usercore/users/</a></p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/03/image.png" class="kg-image" alt="Django Service with Gunicorn" loading="lazy" width="1072" height="924" srcset="https://honstain.com/content/images/size/w600/2024/03/image.png 600w, https://honstain.com/content/images/size/w1000/2024/03/image.png 1000w, https://honstain.com/content/images/2024/03/image.png 1072w" sizes="(min-width: 720px) 720px"></figure><p>You also have the basic Django Admin UI at <a href="http://localhost:8000/admin/?ref=honstain.com">http://localhost:8000/admin/</a></p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/03/image-3.png" class="kg-image" alt="Django Service with Gunicorn" loading="lazy" width="800" height="382" srcset="https://honstain.com/content/images/size/w600/2024/03/image-3.png 600w, https://honstain.com/content/images/2024/03/image-3.png 800w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/03/image-4.png" class="kg-image" alt="Django Service with Gunicorn" loading="lazy" width="1075" height="418" srcset="https://honstain.com/content/images/size/w600/2024/03/image-4.png 600w, https://honstain.com/content/images/size/w1000/2024/03/image-4.png 1000w, https://honstain.com/content/images/2024/03/image-4.png 1075w" sizes="(min-width: 720px) 720px"></figure><p></p><h2 id="intellij-http-client">IntelliJ HTTP Client</h2><p>I found the IntelliJ HTTP client to be a handy way to exercise simple endpoints; Postman would also work here.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" 
href="https://www.jetbrains.com/help/idea/http-client-in-product-code-editor.html?ref=honstain.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">HTTP Client | IntelliJ&#xA0;IDEA</div><div class="kg-bookmark-description">Explore the features of the HTTP Client plugin: compose and execute HTTP requests, view responses, configure proxy settings, certificates, and more.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://jetbrains.com/apple-touch-icon.png" alt="Django Service with Gunicorn"><span class="kg-bookmark-author">IntelliJ&#xA0;IDEA Help</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://resources.jetbrains.com/storage/products/intellij-idea/img/meta/preview.png" alt="Django Service with Gunicorn"></div></a></figure><pre><code>### GET request list
GET http://localhost:8000/usercore/users/
Accept: application/json

### GET request single record
GET http://localhost:8000/usercore/users/40000/
Accept: application/json

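### DELETE request remove single record
### (an added example: DRF's ModelViewSet also exposes destroy;
### this request is not in the repo's .http file)
DELETE http://localhost:8000/usercore/users/40000/
Accept: application/json
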
### POST request create single record
POST http://localhost:8000/usercore/users/
Accept: application/json
Content-Type: application/json

{&quot;name&quot;:&quot;Anthony&quot;, &quot;age&quot;:2}</code></pre><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/03/image-2.png" class="kg-image" alt="Django Service with Gunicorn" loading="lazy" width="1342" height="1300" srcset="https://honstain.com/content/images/size/w600/2024/03/image-2.png 600w, https://honstain.com/content/images/size/w1000/2024/03/image-2.png 1000w, https://honstain.com/content/images/2024/03/image-2.png 1342w" sizes="(min-width: 720px) 720px"></figure><p></p><h1 id="starting-gunicorn">Starting Gunicorn</h1><p>The Django runserver is useful for local development, but we want a more capable web server for more detailed investigation.</p><p>Since we should have already installed gunicorn with our Python dependencies, we can start it using:</p><pre><code>gunicorn --log-level debug --bind 0.0.0.0:8000 user_service.wsgi -w 1</code></pre><p>This uses the WSGI module Django already created for us, and runs a single worker.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/03/image-5.png" class="kg-image" alt="Django Service with Gunicorn" loading="lazy" width="869" height="503" srcset="https://honstain.com/content/images/size/w600/2024/03/image-5.png 600w, https://honstain.com/content/images/2024/03/image-5.png 869w" sizes="(min-width: 720px) 720px"></figure><p>Shooting a few calls into the server should look something like this:</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/03/image-6.png" class="kg-image" alt="Django Service with Gunicorn" loading="lazy" width="1810" height="173" srcset="https://honstain.com/content/images/size/w600/2024/03/image-6.png 600w, https://honstain.com/content/images/size/w1000/2024/03/image-6.png 1000w, https://honstain.com/content/images/size/w1600/2024/03/image-6.png 1600w, https://honstain.com/content/images/2024/03/image-6.png 1810w" sizes="(min-width: 720px) 
720px"></figure><p>This service is also setup with structlog and emits JSON formatted logs to the <code>logs</code> folder in the project directory.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/03/image-7.png" class="kg-image" alt="Django Service with Gunicorn" loading="lazy" width="599" height="321"></figure><pre><code class="language-json">tail -f json.log | jq


{
  &quot;request&quot;: &quot;GET /usercore/users/&quot;,
  &quot;user_agent&quot;: &quot;Apache-HttpClient/4.5.14 (Java/17.0.10)&quot;,
  &quot;event&quot;: &quot;request_started&quot;,
  &quot;ip&quot;: &quot;127.0.0.1&quot;,
  &quot;request_id&quot;: &quot;f4449541-2eea-4337-ad63-fb24de0c0d35&quot;,
  &quot;timestamp&quot;: &quot;2024-03-15T15:06:48.697754Z&quot;,
  &quot;logger&quot;: &quot;django_structlog.middlewares.request&quot;,
  &quot;level&quot;: &quot;info&quot;
}
{
  &quot;size&quot;: &quot;3&quot;,
  &quot;event&quot;: &quot;UserViewSet list called&quot;,
  &quot;ip&quot;: &quot;127.0.0.1&quot;,
  &quot;request_id&quot;: &quot;f4449541-2eea-4337-ad63-fb24de0c0d35&quot;,
  &quot;timestamp&quot;: &quot;2024-03-15T15:06:48.767477Z&quot;,
  &quot;logger&quot;: &quot;usercore.views&quot;,
  &quot;level&quot;: &quot;info&quot;
}
{
  &quot;code&quot;: 200,
  &quot;request&quot;: &quot;GET /usercore/users/&quot;,
  &quot;event&quot;: &quot;request_finished&quot;,
  &quot;ip&quot;: &quot;127.0.0.1&quot;,
  &quot;user_id&quot;: null,
  &quot;request_id&quot;: &quot;f4449541-2eea-4337-ad63-fb24de0c0d35&quot;,
  &quot;timestamp&quot;: &quot;2024-03-15T15:06:48.770240Z&quot;,
  &quot;logger&quot;: &quot;django_structlog.middlewares.request&quot;,
  &quot;level&quot;: &quot;info&quot;
}
</code></pre><h1 id="summary">Summary</h1><p>At this stage you should now have a Django service with DRF that can serve a basic API via Gunicorn (and has logging, tests, and a real PostgreSQL database).</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/AnthonyHonstain/django-user-service?ref=honstain.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - AnthonyHonstain/django-user-service: An example Django 5.0 service with DRF, postgres, gunicorn.</div><div class="kg-bookmark-description">An example Django 5.0 service with DRF, postgres, gunicorn. - AnthonyHonstain/django-user-service</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/assets/pinned-octocat-093da3e6fa40.svg" alt="Django Service with Gunicorn"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">AnthonyHonstain</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/304d3737c475e70ab218dc0b17d559156e075deffc62be48b058de916231d6c9/AnthonyHonstain/django-user-service" alt="Django Service with Gunicorn"></div></a></figure><p>In subsequent posts, we will explore more of having our <a href="https://honstain.com/asyncio-sqs-and-httpx/" rel="noreferrer">SQS consumer</a>, call this service.</p>]]></content:encoded></item><item><title><![CDATA[python asyncio SQS consumer]]></title><description><![CDATA[<p>In this post we are going to stand up a basic Python SQS consumer that leverages asyncio and experiments with some different types of workloads.</p><p>Why would you be interested in this post?</p><ul><li>You&apos;re working in Python and want to create a service to consume from SQS.</li><li>You&</li></ul>]]></description><link>https://honstain.com/asyncio-sqs-and-httpx/</link><guid isPermaLink="false">65b67e186360b628adeb423f</guid><dc:creator><![CDATA[Anthony Honstain]]></dc:creator><pubDate>Sun, 18 Feb 2024 
16:02:19 GMT</pubDate><media:content url="https://honstain.com/content/images/2024/02/2024-02-01_SQS-consumer_Feature_image.png" medium="image"/><content:encoded><![CDATA[<img src="https://honstain.com/content/images/2024/02/2024-02-01_SQS-consumer_Feature_image.png" alt="python asyncio SQS consumer"><p>In this post we are going to stand up a basic Python SQS consumer that leverages asyncio and experiments with some different types of workloads.</p><p>Why would you be interested in this post?</p><ul><li>You&apos;re working in Python and want to create a service to consume from SQS.</li><li>You&apos;re interested in an application of asyncio (in a Python 3.12 context).</li><li>You&apos;re interested in seeing HTTPX used for an IO-bound workload (take an SQS message and process several HTTP calls).</li></ul><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">Source code for the consumer discussed in this post: <a href="https://github.com/AnthonyHonstain/sqs-async-python-consumer?ref=honstain.com">https://github.com/AnthonyHonstain/sqs-async-python-consumer</a></div></div><p>This service was created with the idea that you would ultimately run it in something like Kubernetes. In this post, we will stop significantly short of containerizing and running in a cluster.</p><h2 id="overview-of-key-dependencies">Overview of Key Dependencies</h2><p>I will provide an overview of the interesting dependencies I selected for this service. 
I tried to articulate the reasoning/idea behind why I used what, but all of this is in the context of trying to construct a basic service that runs in a container, processes SQS messages, and makes HTTP calls.</p><p>The really notable dependencies in this service:</p><ul><li><strong>aiobotocore </strong>2.8.0 <a href="https://github.com/aio-libs/aiobotocore?ref=honstain.com">https://github.com/aio-libs/aiobotocore</a> and <a href="https://aiobotocore.readthedocs.io/en/latest/?ref=honstain.com">https://aiobotocore.readthedocs.io/en/latest/</a><ul><li>The dependency I struggled the most with. I was looking for an asynchronous way to get SQS messages.<ul><li>None of this should be a criticism of this project; I think it is fairly unclear, if you&apos;re new to Python+SQS, what the best options for a client are. At least when I compare it to other kinds of clients, like HTTP clients, where you find clear leaders to pick from.</li><li>Discussions on this are probably pretty similar to other discussions in the Python community about how to deal with libraries that don&apos;t support asyncio async/await patterns, resulting in these replicas of the original non-async project.</li><li>This project looks to be reasonably maintained, and it seemed very unlikely that boto was going to get asyncio support.</li></ul></li></ul></li><li><strong>httpx </strong>0.24.2 <a href="https://www.python-httpx.org/?ref=honstain.com">https://www.python-httpx.org/</a><ul><li>I haven&apos;t tried other clients&apos; asyncio support; on cursory review I was pleased with the documentation and never hit any roadblocks. It just worked, and the code was clean with respect to async/await.</li></ul></li><li><strong>pydantic </strong>2.5.2 <a href="https://docs.pydantic.dev/latest/?ref=honstain.com">https://docs.pydantic.dev/latest/</a><ul><li>I used this to model the JSON coming out of SQS and the request/response from the HTTPX calls. 
If I had to do it over again I might go with dataclasses, but I was partly using this as an opportunity to explore pydantic.</li></ul></li><li><strong>python </strong>3.12 <a href="https://www.python.org/downloads/release/python-3120/?ref=honstain.com">https://www.python.org/downloads/release/python-3120/</a> <ul><li>A fair criticism would be that I don&apos;t effectively leverage the new language features (where applicable).</li></ul></li></ul><p>Less critical dependencies:</p><ul><li><strong>python-json-logger</strong> 2.0.7 <a href="https://github.com/madzak/python-json-logger?ref=honstain.com">https://github.com/madzak/python-json-logger</a><ul><li>This helped me structure the logs as JSON.</li></ul></li></ul><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-2.png" class="kg-image" alt="python asyncio SQS consumer" loading="lazy" width="1118" height="228" srcset="https://honstain.com/content/images/size/w600/2024/02/image-2.png 600w, https://honstain.com/content/images/size/w1000/2024/02/image-2.png 1000w, https://honstain.com/content/images/2024/02/image-2.png 1118w" sizes="(min-width: 720px) 720px"></figure><ul><li><strong>black </strong>23.11.0 <a href="https://black.readthedocs.io/en/stable/?ref=honstain.com">https://black.readthedocs.io/en/stable/</a><ul><li>I have been getting a ton of value here (your mileage may vary). I love that it consistently, and with no effort on my part, produces very readable and organized formatting.</li></ul></li><li><strong>mypy </strong>1.7.1 <a href="https://mypy.readthedocs.io/en/stable/?ref=honstain.com">https://mypy.readthedocs.io/en/stable/</a><ul><li>A static type checker - I still bump into odd things, but I have found the juice worth the squeeze as more developers join a project.</li></ul></li><li><strong>respx </strong>0.20.2 <a href="https://lundberg.github.io/respx/?ref=honstain.com">https://lundberg.github.io/respx/</a><ul><li>Used for mocking out HTTPX.
I found it pretty helpful, but I saw some peers get tripped up by not understanding the scope over which its mocks apply.</li></ul></li></ul><h2 id="service-setuppython-and-poetry">Service Setup - Python and Poetry</h2><p>There are many ways to manage dependencies and Python environments, and I won&apos;t argue this is the best one. But it has worked well for me developing on Ubuntu, where I do most of the actual coding in IntelliJ and have multiple projects with different Python versions (some with more complicated dependencies).</p><h3 id="requirements-you-should-have-this-already"><strong>Requirements (you should have this already)</strong></h3><ul><li>Mamba installed (in my case I went with micromamba) <a href="https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html?ref=honstain.com">https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html</a></li></ul><h3 id="setup-the-mamba-environment">Setup the mamba environment</h3><pre><code>mamba create -n sqs-async-consumer -c conda-forge python=3.12
</code></pre>
<figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/01/image-1.png" class="kg-image" alt="python asyncio SQS consumer" loading="lazy" width="642" height="851" srcset="https://honstain.com/content/images/size/w600/2024/01/image-1.png 600w, https://honstain.com/content/images/2024/01/image-1.png 642w"></figure><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/01/image.png" class="kg-image" alt="python asyncio SQS consumer" loading="lazy" width="544" height="251"></figure><h3 id="optional-tip-after-mamba-environment-creation"><strong>Optional Tip After Mamba Environment Creation</strong></h3><p>If you use ZSH, you can modify your .zshrc file to automatically set the environment when you navigate to the directory for the project.</p><pre><code class="language-bash">function mamba_auto_activate() {
    if [ &quot;$(pwd)&quot; = &quot;/home/dev/Desktop/python/sqs-async-consumer&quot; ]; then
        mamba activate sqs-async-consumer
    fi
}
chpwd_functions+=(&quot;mamba_auto_activate&quot;)
</code></pre>
<h3 id="install-poetry">Install Poetry</h3><p>There are a number of ways to install Poetry: <a href="https://python-poetry.org/?ref=honstain.com">https://python-poetry.org/</a></p><p>I am really happy with Poetry (relative to pip) for tracking my Python service&apos;s dependencies, versioning them, and locking them.</p><p>I have had much less success using Poetry for virtual environment management (hence my continued reliance on mamba). This could just be my own failing, or insufficient knowledge of and research into Poetry and how best to use it. When I have tried to run pure Poetry, I get tangled up with IntelliJ not being able to correctly interact with the environment (important for me, since I prefer to use IntelliJ to run/test/debug the service during my normal development cycle).</p><h3 id="install-poetry-dependencies">Install Poetry Dependencies</h3><p>This will install aiobotocore and everything else we need for the service to run.</p><pre><code>poetry install
</code></pre>
<p>The <code>pyproject.toml</code> is a critical file to review <a href="https://github.com/AnthonyHonstain/sqs-async-python-consumer/blob/main/pyproject.toml?ref=honstain.com">https://github.com/AnthonyHonstain/sqs-async-python-consumer/blob/main/pyproject.toml</a></p><pre><code class="language-yaml">[tool.poetry]
name = &quot;sqs-consumer-project&quot;
version = &quot;0.1.0&quot;
description = &quot;&quot;
authors = [&quot;Your Name &lt;you@example.com&gt;&quot;]
readme = &quot;README.md&quot;

[tool.poetry.dependencies]
python = &quot;^3.12&quot;
aiobotocore = &quot;^2.8.0&quot;
pydantic = &quot;^2.5.2&quot;
black = &quot;^23.11.0&quot;
httpx = &quot;^0.25.2&quot;
mypy = &quot;^1.7.1&quot;
python-json-logger = &quot;^2.0.7&quot;

[tool.poetry.group.dev.dependencies]
pytest = &quot;^7.4.3&quot;
pytest-asyncio = &quot;^0.23.2&quot;
respx = &quot;^0.20.2&quot;

[build-system]
requires = [&quot;poetry-core&quot;]
build-backend = &quot;poetry.core.masonry.api&quot;

# Black configuration
[tool.black]
line-length = 120

</code></pre>
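<p>A quick note on the <code>^</code> constraints above: Poetry&apos;s caret requirement allows upgrades that keep the leftmost non-zero version component fixed, so <code>aiobotocore = "^2.8.0"</code> means at least 2.8.0 but below 3.0.0 (for 0.x packages like httpx, the minor component is held instead). A rough sketch of the major &gt; 0 case - a simplification for illustration, not Poetry&apos;s actual resolver:</p>

```python
def caret_allows(constraint: str, candidate: str) -> bool:
    # Simplified Poetry caret semantics for constraints whose major
    # version is non-zero, e.g. "^2.8.0" -> >=2.8.0,<3.0.0.
    # Does not handle 0.x constraints, pre-releases, or build metadata.
    base = tuple(int(part) for part in constraint.lstrip("^").split("."))
    version = tuple(int(part) for part in candidate.split("."))
    return version >= base and version[0] == base[0]
```

<p>For example, <code>caret_allows("^2.8.0", "2.9.1")</code> holds, while a jump to 3.0.0 does not - which is why <code>poetry update</code> will not silently cross a major version boundary.</p>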
<p>I also found it helpful to check the Poetry configuration:</p><pre><code>poetry config --list
</code></pre>
<figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image.png" class="kg-image" alt="python asyncio SQS consumer" loading="lazy" width="629" height="283" srcset="https://honstain.com/content/images/size/w600/2024/02/image.png 600w, https://honstain.com/content/images/2024/02/image.png 629w"></figure><h3 id="docker-compose-wiremock-and-localstack">Docker Compose - wiremock and localstack</h3><p>We will use docker compose to stand up wiremock and localstack for development and testing.</p><ul><li><a href="https://www.localstack.cloud/?ref=honstain.com" rel="noreferrer">localstack </a>is used to provide a local version of AWS&apos;s SQS product.<ul><li>Note that this docker-compose.yml also contains a basic localstack init step that automatically creates the queue you want for local development.</li></ul></li><li><a href="https://wiremock.org/?ref=honstain.com" rel="noreferrer">wiremock</a> is used to provide a mock HTTP service.<ul><li>Note - it is created with some starter mappings.</li></ul></li></ul><div class="kg-card kg-callout-card kg-callout-card-yellow"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">I ran into issues with some versions of the localstack image when I was lazily trying to use latest.</div></div><p>Source for the docker-compose file: <a href="https://github.com/AnthonyHonstain/sqs-async-python-consumer/blob/main/docker-compose.yml?ref=honstain.com">https://github.com/AnthonyHonstain/sqs-async-python-consumer/blob/main/docker-compose.yml</a></p><p>This is the <code>docker-compose.yml</code> file for the service:</p><pre><code class="language-yaml">version: &apos;3.8&apos;

services:
  localstack:
    image: localstack/localstack:3.1.0
    ports:
      - &quot;4566:4566&quot; # LocalStack&apos;s default edge port
      - &quot;4571:4571&quot; # Deprecated port, but can be included for backward compatibility
    environment:
      - SERVICES=sqs
      #- DEBUG=1
      - DATA_DIR=/tmp/localstack/data
    volumes:
      # https://docs.localstack.cloud/getting-started/installation/#docker-compose
      - &quot;${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack&quot;

  wiremock:
    image: wiremock/wiremock:3.3.1-2
    ports:
      - &quot;8080:8080&quot; # Default Wiremock port
    volumes:
      - ./wiremock:/home/wiremock
    command: --verbose

  localstack-init:
    image: amazon/aws-cli:2.15.15
    depends_on:
      - localstack
    environment:
      AWS_ACCESS_KEY_ID: &apos;test&apos;
      AWS_SECRET_ACCESS_KEY: &apos;test&apos;
      AWS_DEFAULT_REGION: &apos;us-east-1&apos;
    volumes:
      - ./init-localstack.sh:/init-localstack.sh  # Init script that creates the SQS queue
    entrypoint: /bin/sh
    command: -c &quot;/init-localstack.sh&quot;
</code></pre>
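<p>For reference, a client talking to this localstack container needs to be pointed at the edge port (4566) with dummy credentials. The settings below are illustrative assumptions for an aiobotocore/boto3-style client, not configuration taken from the repo:</p>

```python
# Hypothetical client settings for pointing an aiobotocore/boto3-style SQS
# client at the localstack container defined in the docker-compose file above.
LOCALSTACK_SQS = {
    "endpoint_url": "http://localhost:4566",  # localstack's edge port
    "region_name": "us-east-1",               # matches AWS_DEFAULT_REGION above
    "aws_access_key_id": "test",              # localstack accepts dummy creds
    "aws_secret_access_key": "test",
}


def queue_url(queue_name: str) -> str:
    # localstack serves queues under its default account id, 000000000000
    return f"{LOCALSTACK_SQS['endpoint_url']}/000000000000/{queue_name}"
```

<p>With aiobotocore these would typically be passed as keyword arguments, e.g. <code>session.create_client("sqs", **LOCALSTACK_SQS)</code> - again an assumption about wiring, not code from the repo.</p>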
<figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-3.png" class="kg-image" alt="python asyncio SQS consumer" loading="lazy" width="1276" height="966" srcset="https://honstain.com/content/images/size/w600/2024/02/image-3.png 600w, https://honstain.com/content/images/size/w1000/2024/02/image-3.png 1000w, https://honstain.com/content/images/2024/02/image-3.png 1276w" sizes="(min-width: 720px) 720px"></figure><h3 id="setting-up-intellij">Setting up IntelliJ</h3><p>I use the Ultimate version, but PyCharm should be fine. <a href="https://www.jetbrains.com/?ref=honstain.com">https://www.jetbrains.com/</a></p><p>Open module settings</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-8.png" class="kg-image" alt="python asyncio SQS consumer" loading="lazy" width="939" height="960" srcset="https://honstain.com/content/images/size/w600/2024/02/image-8.png 600w, https://honstain.com/content/images/2024/02/image-8.png 939w" sizes="(min-width: 720px) 720px"></figure><p>Configure an SDK</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-7.png" class="kg-image" alt="python asyncio SQS consumer" loading="lazy" width="959" height="386" srcset="https://honstain.com/content/images/size/w600/2024/02/image-7.png 600w, https://honstain.com/content/images/2024/02/image-7.png 959w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-9.png" class="kg-image" alt="python asyncio SQS consumer" loading="lazy" width="747" height="362" srcset="https://honstain.com/content/images/size/w600/2024/02/image-9.png 600w, https://honstain.com/content/images/2024/02/image-9.png 747w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-10.png" class="kg-image" alt="python asyncio SQS 
consumer" loading="lazy" width="1146" height="827" srcset="https://honstain.com/content/images/size/w600/2024/02/image-10.png 600w, https://honstain.com/content/images/size/w1000/2024/02/image-10.png 1000w, https://honstain.com/content/images/2024/02/image-10.png 1146w" sizes="(min-width: 720px) 720px"></figure><p>Example IntelliJ configuration for running locally: </p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-6.png" class="kg-image" alt="python asyncio SQS consumer" loading="lazy" width="1191" height="526" srcset="https://honstain.com/content/images/size/w600/2024/02/image-6.png 600w, https://honstain.com/content/images/size/w1000/2024/02/image-6.png 1000w, https://honstain.com/content/images/2024/02/image-6.png 1191w" sizes="(min-width: 720px) 720px"></figure><p>Example IntelliJ configuration for running all tests:</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-5.png" class="kg-image" alt="python asyncio SQS consumer" loading="lazy" width="1213" height="972" srcset="https://honstain.com/content/images/size/w600/2024/02/image-5.png 600w, https://honstain.com/content/images/size/w1000/2024/02/image-5.png 1000w, https://honstain.com/content/images/2024/02/image-5.png 1213w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-11.png" class="kg-image" alt="python asyncio SQS consumer" loading="lazy" width="782" height="406" srcset="https://honstain.com/content/images/size/w600/2024/02/image-11.png 600w, https://honstain.com/content/images/2024/02/image-11.png 782w" sizes="(min-width: 720px) 720px"></figure><h3 id="run-tests">Run Tests</h3><p>This can be done with <code>poetry run pytest</code> </p><p>Start the required dependencies </p><div class="kg-card kg-callout-card kg-callout-card-yellow"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">These 
tests require the docker containers we depend on (localstack and wiremock) to be running; they are not started automatically by the test suite.</div></div><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-4.png" class="kg-image" alt="python asyncio SQS consumer" loading="lazy" width="919" height="489" srcset="https://honstain.com/content/images/size/w600/2024/02/image-4.png 600w, https://honstain.com/content/images/2024/02/image-4.png 919w" sizes="(min-width: 720px) 720px"></figure><h3 id="run-the-service-locally">Run The Service Locally</h3><p>You can use IntelliJ with the settings provided previously in this guide, or you can use the command line: <code>poetry run python sqs_consumer_project/sqs_consumer.py</code></p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2024/02/image-12.png" class="kg-image" alt="python asyncio SQS consumer" loading="lazy" width="1408" height="335" srcset="https://honstain.com/content/images/size/w600/2024/02/image-12.png 600w, https://honstain.com/content/images/size/w1000/2024/02/image-12.png 1000w, https://honstain.com/content/images/2024/02/image-12.png 1408w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://honstain.com/content/images/2024/02/image-13.png" class="kg-image" alt="python asyncio SQS consumer" loading="lazy" width="936" height="583" srcset="https://honstain.com/content/images/size/w600/2024/02/image-13.png 600w, https://honstain.com/content/images/2024/02/image-13.png 936w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">The docker compose up output - you can see the SQS receive and delete</span></figcaption></figure><p>You can use <a href="https://jqlang.github.io/jq/?ref=honstain.com" rel="noreferrer">jq</a> to format the results: <code>poetry run python sqs_consumer_project/sqs_consumer.py 2&gt;&amp;1 | jq</code></p><pre><code 
class="language-json">&#x276F; poetry run python sqs_consumer_project/sqs_consumer.py          

{&quot;asctime&quot;: &quot;2024-02-18 07:46:12,742&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;Polling for messages&quot;, &quot;taskName&quot;: &quot;Task-3&quot;}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:12,744&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;Polling for messages&quot;, &quot;taskName&quot;: &quot;Task-2&quot;}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:12,748&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;receive_messages got messages&quot;, &quot;taskName&quot;: &quot;Task-3&quot;, &quot;message_count&quot;: 1}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:12,748&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;Starting MessageId processing&quot;, &quot;taskName&quot;: &quot;Task-3&quot;, &quot;message_id&quot;: &quot;a4f2d436-36f3-408d-a747-f14687e103a1&quot;}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:12,748&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;pydantic model&quot;, &quot;taskName&quot;: &quot;Task-3&quot;, &quot;sqs_message&quot;: &quot;name=&apos;Anthony&apos; age=2&quot;}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:12,748&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;Started work&quot;, &quot;taskName&quot;: &quot;Task-3&quot;, &quot;message_name&quot;: &quot;Anthony&quot;, &quot;message_age&quot;: 2}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:14,750&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;Polling for messages&quot;, &quot;taskName&quot;: &quot;Task-2&quot;}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:16,761&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;Polling for messages&quot;, &quot;taskName&quot;: &quot;Task-2&quot;}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:16,945&quot;, &quot;name&quot;: &quot;httpx&quot;, &quot;levelname&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;HTTP Request: POST http://localhost:8080/record_user \&quot;HTTP/1.1 200 OK\&quot;&quot;, &quot;taskName&quot;: &quot;Task-3&quot;}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:16,946&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;Received response&quot;, &quot;taskName&quot;: &quot;Task-3&quot;, &quot;user_id&quot;: &quot;xxxxxx&quot;}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:16,946&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;Completed work&quot;, &quot;taskName&quot;: &quot;Task-3&quot;, &quot;message_name&quot;: &quot;Anthony&quot;, &quot;message_age&quot;: 3, &quot;user_id&quot;: &quot;xxxxxx&quot;}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:16,958&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;Polling for messages&quot;, &quot;taskName&quot;: &quot;Task-3&quot;}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:18,766&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;Polling for messages&quot;, &quot;taskName&quot;: &quot;Task-2&quot;}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:18,963&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;Polling for messages&quot;, &quot;taskName&quot;: &quot;Task-3&quot;}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:18,988&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;ERROR&quot;, &quot;message&quot;: &quot;Cancel Error&quot;, &quot;taskName&quot;: &quot;Task-2&quot;}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:18,988&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;Finished&quot;, &quot;taskName&quot;: &quot;Task-2&quot;}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:18,988&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;ERROR&quot;, &quot;message&quot;: &quot;Cancel Error&quot;, &quot;taskName&quot;: &quot;Task-3&quot;}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:18,988&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;Finished&quot;, &quot;taskName&quot;: &quot;Task-3&quot;}
{&quot;asctime&quot;: &quot;2024-02-18 07:46:19,000&quot;, &quot;name&quot;: &quot;root&quot;, &quot;levelname&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;Script interrupted by user&quot;, &quot;taskName&quot;: null}
</code></pre>
<p>The important sequence of work one of the tasks goes through when consuming an SQS message:</p>
<ul>
<li><code>&quot;message&quot;: &quot;receive_messages got messages&quot;</code> We have a message from SQS to work on.</li>
<li><code>&quot;message&quot;: &quot;Started work&quot;</code> We are going to make the HTTP call (we expect a delay).</li>
<li>You will see other work happening in the service during this time.</li>
<li><code>&quot;message&quot;: &quot;Received response&quot;</code> We got an HTTP response.</li>
<li><code>&quot;message&quot;: &quot;Completed work&quot;</code> We successfully completed our work on the message and will signal it can be deleted.</li>
</ul>
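<p>The loop that produces this sequence can be sketched with plain asyncio. The sketch below is a simplified stand-in, not the repo&apos;s actual code: <code>receive_messages</code> fakes aiobotocore&apos;s long poll with an <code>asyncio.Queue</code>, and <code>process_message</code> stands in for the HTTPX call.</p>

```python
import asyncio
import json
from dataclasses import dataclass


@dataclass
class SqsMessage:
    # Stand-in for the pydantic model that parses the SQS message body.
    name: str
    age: int


async def receive_messages(queue: asyncio.Queue) -> list:
    # Stand-in for aiobotocore's long-poll receive_message call.
    try:
        return [await asyncio.wait_for(queue.get(), timeout=0.1)]
    except asyncio.TimeoutError:
        return []


async def process_message(body: str) -> SqsMessage:
    message = SqsMessage(**json.loads(body))
    await asyncio.sleep(0)  # yield to the event loop, as the real HTTP call would
    return message


async def consumer(queue: asyncio.Queue, processed: list, polls: int) -> None:
    for _ in range(polls):
        for body in await receive_messages(queue):
            processed.append(await process_message(body))
            # Real code would now delete the message from SQS.


async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    await queue.put('{"name": "Anthony", "age": 2}')
    processed: list = []
    # Two concurrent consumer tasks, mirroring Task-2 / Task-3 in the logs.
    await asyncio.gather(
        consumer(queue, processed, 2),
        consumer(queue, processed, 2),
    )
    return processed


if __name__ == "__main__":
    print(asyncio.run(main()))
```

<p>Only one of the two tasks picks up the single queued message; the other keeps polling and times out, which mirrors how only one SQS consumer receives a given message.</p>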
<h1 id="summary">Summary</h1><p>We provided the outline of a service that could consume SQS messages and make HTTP calls and got basic functionality working. We also got some rudimentary logging and testing established.</p><p>You can get the whole repo on github if you want to pull it down to modify or review. This service is missing some things you would want before using it in a production context (configuration and more robust testing being notable). We may take this service and experiment with HTTPX/concurrency in subsequent posts.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/AnthonyHonstain/sqs-async-python-consumer?ref=honstain.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - AnthonyHonstain/sqs-async-python-consumer: An example service written in Python using asyncio that can consume SQS messages and execute HTTP calls based on those messages.</div><div class="kg-bookmark-description">An example service written in Python using asyncio that can consume SQS messages and execute HTTP calls based on those messages. - GitHub - AnthonyHonstain/sqs-async-python-consumer: An example ser&#x2026;</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/assets/pinned-octocat-093da3e6fa40.svg" alt="python asyncio SQS consumer"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">AnthonyHonstain</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/6fa9c30e15ea8dfde8b023c76ed39555ef7ae311b3143c85cba19f3a6ffbaa27/AnthonyHonstain/sqs-async-python-consumer" alt="python asyncio SQS consumer"></div></a></figure>]]></content:encoded></item><item><title><![CDATA[Spring and Redis Streams Intro]]></title><description><![CDATA[<p></p><p>This will be a quick walkthrough of standing up a very basic Spring Boot Kotlin service that can consume from a Redis Stream. 
We will also take a brief look at using RedisInsight as part of our local docker setup.</p><p>What you will get in this post:</p>
<ul>
<li>Stand up Redis 6</li></ul>]]></description><link>https://honstain.com/spring-boot-and-redis-streams-intro/</link><guid isPermaLink="false">65b52aaf7a5d430e36b8ec8d</guid><category><![CDATA[Docker]]></category><category><![CDATA[Java]]></category><category><![CDATA[Redis]]></category><category><![CDATA[Streams]]></category><category><![CDATA[Spring Boot]]></category><dc:creator><![CDATA[Anthony Honstain]]></dc:creator><pubDate>Wed, 23 Feb 2022 16:16:11 GMT</pubDate><content:encoded><![CDATA[<p></p><p>This will be a quick walkthrough of standing up a very basic Spring Boot Kotlin service that can consume from a Redis Stream. We will also take a brief look at using RedisInsight as part of our local docker setup.</p><p>What you will get in this post:</p>
<ul>
<li>Stand up Redis 6 using docker-compose</li>
<li>Create a basic Spring Boot service (we are going to skip logging/alerting/testing)</li>
<li>Create a consumer for the Redis stream</li>
<li>Manually publish to the stream and inspect the state of the stream using RedisInsight.</li>
</ul>
<p>This isn&apos;t meant to be a production system; it&apos;s really just a prototype for experimenting with Redis Streams and the corresponding spring-data-redis and Lettuce libraries.</p>
<p>Key technologies at play here:</p>
<ul>
<li>Kotlin 1.6.10 <a href="https://kotlinlang.org/docs/releases.html?ref=honstain.com#release-details">https://kotlinlang.org/docs/releases.html#release-details</a></li>
<li>Spring Boot version 2.6.3 <a href="https://spring.io/blog/2022/01/20/spring-boot-2-6-3-is-now-available?ref=honstain.com">https://spring.io/blog/2022/01/20/spring-boot-2-6-3-is-now-available</a></li>
<li>spring-data-redis <a href="https://spring.io/projects/spring-data-redis?ref=honstain.com">https://spring.io/projects/spring-data-redis</a></li>
<li>Redis Streams via Redis 6.2.6 <a href="https://raw.githubusercontent.com/redis/redis/6.2/00-RELEASENOTES?ref=honstain.com">https://raw.githubusercontent.com/redis/redis/6.2/00-RELEASENOTES</a></li>
<li>redisinsight <a href="https://redis.com/redis-enterprise/redis-insight/?ref=honstain.com">https://redis.com/redis-enterprise/redis-insight/</a></li>
<li>Docker <a href="https://docs.docker.com/engine/?ref=honstain.com">https://docs.docker.com/engine/</a></li>
<li>docker-compose <a href="https://docs.docker.com/compose/?ref=honstain.com">https://docs.docker.com/compose/</a></li>
<li>NOTE - the operating system I used was Ubuntu 20.04.2</li>
</ul>
<hr><h2 id="setup-redis-streams-with-docker-compose">Setup Redis Streams with Docker Compose</h2><p>Starting with a very basic docker-compose script</p><pre><code class="language-yaml">version: &quot;3.9&quot;
# https://docs.docker.com/compose/compose-file/compose-versioning/

services:

  redis:
    # Reference:
    #   https://hub.docker.com/_/redis
    hostname: redis
    image: &quot;redis:alpine&quot;
    ports:
      - &quot;6379:6379&quot;

  redisinsight:
    # Reference:
    #   https://docs.redis.com/latest/ri/installing/install-docker/
    #
    # REMEMBER - to connect to the redis database, use the host: &quot;redis&quot;
    image: &quot;redislabs/redisinsight:latest&quot;
    ports:
      - &quot;8001:8001&quot;
</code></pre>
<p>Startup with <code>docker-compose up</code></p><pre><code class="language-text">&#x276F; docker-compose up
Starting streams_redis_1        ... done
Starting streams_redisinsight_1 ... done
Attaching to streams_redisinsight_1, streams_redis_1
redisinsight_1  | Process 9 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid)
redis_1         | 1:C 13 Feb 2022 16:40:04.068 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1         | 1:C 13 Feb 2022 16:40:04.068 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1         | 1:C 13 Feb 2022 16:40:04.068 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1         | 1:M 13 Feb 2022 16:40:04.068 * monotonic clock: POSIX clock_gettime
redis_1         | 1:M 13 Feb 2022 16:40:04.069 * Running mode=standalone, port=6379.
redis_1         | 1:M 13 Feb 2022 16:40:04.069 # Server initialized
redis_1         | 1:M 13 Feb 2022 16:40:04.069 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add &apos;vm.overcommit_memory = 1&apos; to /etc/sysctl.conf and then reboot or run the command &apos;sysctl vm.overcommit_memory=1&apos; for this to take effect.
redis_1         | 1:M 13 Feb 2022 16:40:04.070 * Loading RDB produced by version 6.2.6
redis_1         | 1:M 13 Feb 2022 16:40:04.070 * RDB age 63 seconds
redis_1         | 1:M 13 Feb 2022 16:40:04.070 * RDB memory usage when created 0.77 Mb
redis_1         | 1:M 13 Feb 2022 16:40:04.070 # Done loading RDB, keys loaded: 0, keys expired: 0.
redis_1         | 1:M 13 Feb 2022 16:40:04.070 * DB loaded from disk: 0.000 seconds
redis_1         | 1:M 13 Feb 2022 16:40:04.070 * Ready to accept connections
</code></pre>
<p>You should now be able to connect to RedisInsight at <a href="http://localhost:8001/?ref=honstain.com">http://localhost:8001/</a></p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2022/02/image-10.png" class="kg-image" alt loading="lazy" width="1008" height="1118" srcset="https://honstain.com/content/images/size/w600/2022/02/image-10.png 600w, https://honstain.com/content/images/size/w1000/2022/02/image-10.png 1000w, https://honstain.com/content/images/2022/02/image-10.png 1008w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2022/02/image-11.png" class="kg-image" alt loading="lazy" width="949" height="500" srcset="https://honstain.com/content/images/size/w600/2022/02/image-11.png 600w, https://honstain.com/content/images/2022/02/image-11.png 949w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2022/02/image-13.png" class="kg-image" alt loading="lazy" width="907" height="988" srcset="https://honstain.com/content/images/size/w600/2022/02/image-13.png 600w, https://honstain.com/content/images/2022/02/image-13.png 907w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2022/02/image-14.png" class="kg-image" alt loading="lazy" width="1101" height="858" srcset="https://honstain.com/content/images/size/w600/2022/02/image-14.png 600w, https://honstain.com/content/images/size/w1000/2022/02/image-14.png 1000w, https://honstain.com/content/images/2022/02/image-14.png 1101w" sizes="(min-width: 720px) 720px"></figure><p>Let&apos;s create a stream using the redis-cli:</p><pre><code class="language-bash">&#x276F; docker ps
CONTAINER ID   IMAGE                           COMMAND                  CREATED          STATUS         PORTS                                       NAMES
aca29cb3864a   redis:alpine                    &quot;docker-entrypoint.s&#x2026;&quot;   11 minutes ago   Up 4 minutes   0.0.0.0:6379-&gt;6379/tcp, :::6379-&gt;6379/tcp   streams_redis_1
a5f043666ef5   redislabs/redisinsight:latest   &quot;bash ./docker-entry&#x2026;&quot;   11 minutes ago   Up 4 minutes   0.0.0.0:8001-&gt;8001/tcp, :::8001-&gt;8001/tcp   streams_redisinsight_1

&#x276F; docker exec -it aca29cb3864a sh
/data # redis-cli
127.0.0.1:6379&gt; XADD mystream * sensor-id 1234 temperature 14.0
&quot;1644771139891-0&quot;
127.0.0.1:6379&gt; XGROUP CREATE mystream mygroup $
OK
</code></pre>
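<p>Before moving on, note the entry ID that XADD returned above (<code>1644771139891-0</code>): Redis stream IDs are <code>&lt;milliseconds&gt;-&lt;sequence&gt;</code> pairs, ordered first by timestamp and then by sequence number, and the <code>$</code> in XGROUP CREATE means the group will only see entries added after the group was created. A small sketch of how these IDs order (in Python purely for illustration; the service in this post is Kotlin):</p>

```python
def parse_stream_id(stream_id: str) -> tuple:
    # Redis stream entry IDs have the form "<unix-ms>-<sequence>".
    ms, seq = stream_id.split("-")
    return (int(ms), int(seq))


# Entries added within the same millisecond get increasing sequence
# numbers, so tuple comparison reproduces Redis's entry ordering.
ids = ["1644771139891-1", "1644771139891-0", "1644771140000-0"]
ordered = sorted(ids, key=parse_stream_id)
```

<p>This ordering is what consumer groups use to track each consumer&apos;s position in the stream.</p>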
<p>We can now see the stream in RedisInsight</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2022/02/image-15.png" class="kg-image" alt loading="lazy" width="1100" height="863" srcset="https://honstain.com/content/images/size/w600/2022/02/image-15.png 600w, https://honstain.com/content/images/size/w1000/2022/02/image-15.png 1000w, https://honstain.com/content/images/2022/02/image-15.png 1100w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2022/02/image-16.png" class="kg-image" alt loading="lazy" width="854" height="734" srcset="https://honstain.com/content/images/size/w600/2022/02/image-16.png 600w, https://honstain.com/content/images/2022/02/image-16.png 854w" sizes="(min-width: 720px) 720px"></figure><hr>
<h2 id="setup-a-spring-boot-service">Setup a Spring Boot Service</h2><h3 id="create-the-initial-skeleton">Create the initial skeleton</h3><p>Using <a href="https://start.spring.io/?ref=honstain.com">https://start.spring.io/</a> to initialize a service.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2022/02/image.png" class="kg-image" alt loading="lazy" width="989" height="891" srcset="https://honstain.com/content/images/size/w600/2022/02/image.png 600w, https://honstain.com/content/images/2022/02/image.png 989w" sizes="(min-width: 720px) 720px"></figure><p>The project will be created with a <code>HELP.md</code> that looks like this, with links to some of the relevant documentation. </p><pre><code class="language-text"># Getting Started

### Reference Documentation
For further reference, please consider the following sections:

* [Official Apache Maven documentation](https://maven.apache.org/guides/index.html)
* [Spring Boot Maven Plugin Reference Guide](https://docs.spring.io/spring-boot/docs/2.6.3/maven-plugin/reference/html/)
* [Create an OCI image](https://docs.spring.io/spring-boot/docs/2.6.3/maven-plugin/reference/html/#build-image)
* [Spring Data Redis (Access+Driver)](https://docs.spring.io/spring-boot/docs/2.6.3/reference/htmlsingle/#boot-features-redis)

### Guides
The following guides illustrate how to use some features concretely:

* [Messaging with Redis](https://spring.io/guides/gs/messaging-redis/)

</code></pre>
<p>Then import the application to IntelliJ</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2022/02/image-1.png" class="kg-image" alt loading="lazy" width="753" height="587" srcset="https://honstain.com/content/images/size/w600/2022/02/image-1.png 600w, https://honstain.com/content/images/2022/02/image-1.png 753w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2022/02/image-7.png" class="kg-image" alt loading="lazy" width="1299" height="480" srcset="https://honstain.com/content/images/size/w600/2022/02/image-7.png 600w, https://honstain.com/content/images/size/w1000/2022/02/image-7.png 1000w, https://honstain.com/content/images/2022/02/image-7.png 1299w" sizes="(min-width: 720px) 720px"></figure><p>Now that we have the shell of our app, we need to create a connection to Redis and subscribe to the stream.</p><h3 id="redis-connection">Redis Connection</h3><p>I will start with the connection factory; I have opted to use Lettuce for connection management.</p><p>If you want some resources to review for Lettuce and Jedis:</p>
<ul>
<li><a href="https://redis.com/blog/jedis-vs-lettuce-an-exploration/?ref=honstain.com">https://redis.com/blog/jedis-vs-lettuce-an-exploration/</a></li>
<li><a href="https://docs.spring.io/spring-data/redis/docs/current/reference/html/?ref=honstain.com#reference">https://docs.spring.io/spring-data/redis/docs/current/reference/html/#reference</a></li>
<li><a href="https://lettuce.io/?ref=honstain.com">https://lettuce.io/</a></li>
<li><a href="https://github.com/redis/jedis?ref=honstain.com">https://github.com/redis/jedis</a></li>
</ul>
<p>ConnectionFactory.kt</p>
<pre><code class="language-kotlin">package com.example.StreamConsumerDemo2

import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.data.redis.connection.RedisStandaloneConfiguration
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory

@Configuration
class ConnectionFactory {

    @Bean
    fun redisConnectionFactory(): LettuceConnectionFactory {
        return LettuceConnectionFactory(
            RedisStandaloneConfiguration(&quot;localhost&quot;, 6379)
        )
    }
}
</code></pre>
<h3 id="redis-template-and-the-listener">Redis Template and the Listener</h3><p>This is the code that processes each message; it is also where we execute the manual ack back to Redis.</p><p>MyStreamListener.kt</p>
<pre><code class="language-kotlin">package com.example.StreamConsumerDemo2

import org.springframework.data.redis.connection.stream.MapRecord
import org.springframework.data.redis.core.StringRedisTemplate
import org.springframework.data.redis.stream.StreamListener
import org.springframework.stereotype.Component

@Component
class MyStreamListener(
    var redisTemplate: StringRedisTemplate
): StreamListener&lt;String, MapRecord&lt;String, String, String&gt;&gt; {

    override fun onMessage(message: MapRecord&lt;String, String, String&gt;) {
        println(&quot;id: ${message.id} stream: ${message.stream} value: ${message.value}&quot;)

        redisTemplate.opsForStream&lt;String, String&gt;().acknowledge(&quot;mygroup&quot;, message)
    }
}
</code></pre>
<h3 id="container-and-subscription">Container and Subscription</h3><p>Next, we need to create a container and a subscription on the StreamMessageListenerContainer that uses our Streams consumer group.</p><p>Some of the important things in the <code>StreamConsumer.kt</code> to consider:</p>
<ul>
<li><code>pollTimeout</code> on the StreamMessageListenerContainerOptions.
<ul>
<li>This is important because it controls how long the Lettuce client blocks while polling (which can impact shutdown time).</li>
</ul>
</li>
<li>How we construct our <code>Consumer</code>, as this is where we specify the group and consumer.
<ul>
<li>This is important because you would control the consumer names here if you ended up creating multiple instances of this service.</li>
<li>The stream, group, and consumer names are hard-coded here to minimize abstraction and keep the example easy to follow; for production you would want to move these into configuration.</li>
</ul>
</li>
<li>How we construct StreamOffset.
<ul>
<li>This is important because it will determine how we start consuming from the stream when we start up.</li>
</ul>
</li>
</ul>
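<p>To make the last bullet concrete, here is a hedged sketch of the common <code>ReadOffset</code> choices in spring-data-redis (which one is appropriate depends on whether you want to replay history on startup):</p><pre><code class="language-kotlin">// Start from wherever the consumer group left off (what this post uses):
StreamOffset.create(&quot;mystream&quot;, ReadOffset.lastConsumed())

// Only receive messages added after the subscription starts:
StreamOffset.create(&quot;mystream&quot;, ReadOffset.latest())

// Start from an explicit message id (here, the beginning of the stream):
StreamOffset.create(&quot;mystream&quot;, ReadOffset.from(&quot;0-0&quot;))</code></pre>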
<p>StreamConsumer.kt</p>
<pre><code class="language-kotlin">package com.example.StreamConsumerDemo2

import org.springframework.data.redis.connection.RedisConnectionFactory
import org.springframework.data.redis.connection.stream.Consumer
import org.springframework.data.redis.connection.stream.MapRecord
import org.springframework.data.redis.connection.stream.ReadOffset
import org.springframework.data.redis.connection.stream.StreamOffset
import org.springframework.data.redis.stream.StreamMessageListenerContainer
import org.springframework.data.redis.stream.Subscription
import org.springframework.stereotype.Component
import java.time.Duration
import java.util.concurrent.TimeUnit
import javax.annotation.PreDestroy

@Component
class StreamConsumer(
    redisConnectionFactory: RedisConnectionFactory,
    streamListener: MyStreamListener,
) {

    final val POLL_TIMEOUT = 1000L

    final var container: StreamMessageListenerContainer&lt;String, MapRecord&lt;String, String, String&gt;&gt;
    final var subscription: Subscription

    init {
        val containerOptions = StreamMessageListenerContainer.StreamMessageListenerContainerOptions.builder()
            .pollTimeout(Duration.ofMillis(POLL_TIMEOUT))
            .build()
        container = StreamMessageListenerContainer.create(redisConnectionFactory, containerOptions)

        val consumer = Consumer.from(&quot;mygroup&quot;, &quot;Alice&quot;)
        subscription = container.receive(
            consumer,
            StreamOffset.create(&quot;mystream&quot;, ReadOffset.lastConsumed()),
            streamListener
        )
        container.start()
    }
}
</code></pre>
<h3 id="experiment-with-the-kotlin-service">Experiment with the Kotlin Service</h3><p>This will give you a service that can consume from the Redis Stream, ack messages, and print their content.</p><p>Start your service using IntelliJ Run/Debug Configurations</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2022/02/image-18.png" class="kg-image" alt loading="lazy" width="1424" height="1023" srcset="https://honstain.com/content/images/size/w600/2022/02/image-18.png 600w, https://honstain.com/content/images/size/w1000/2022/02/image-18.png 1000w, https://honstain.com/content/images/2022/02/image-18.png 1424w" sizes="(min-width: 720px) 720px"></figure><p>If you want to use the command line instead (note - you need maven):</p>
<pre><code class="language-bash"># To Construct the artifact.
mvn clean package

# To run the JAR we created in the previous mvn package.
java -jar target/StreamConsumerDemo2-0.0.1-SNAPSHOT.jar 
</code></pre>
<p>Which should give you something like this.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2022/02/image-20.png" class="kg-image" alt loading="lazy" width="1328" height="281" srcset="https://honstain.com/content/images/size/w600/2022/02/image-20.png 600w, https://honstain.com/content/images/size/w1000/2022/02/image-20.png 1000w, https://honstain.com/content/images/2022/02/image-20.png 1328w" sizes="(min-width: 720px) 720px"></figure><p>By publishing some messages to the stream using the redis-cli (running in a docker container), you can see your service start consuming them. There are many ways to achieve this; I chose this method to leverage tools the reader already has (we stood up Redis in a Docker container in the first part of this tutorial).</p><pre><code class="language-bash">&#x276F; docker ps --format &apos;{{.Names}}&apos;
streams_redisinsight_1
streams_redis_1

&#x276F; docker exec -it streams_redis_1 sh

/data # redis-cli

127.0.0.1:6379&gt; XADD mystream * sensor-id 1234 temperature 15.1
&quot;1644856873947-0&quot;
127.0.0.1:6379&gt; XADD mystream * sensor-id 1234 temperature 15.0
&quot;1644856875587-0&quot;
127.0.0.1:6379&gt; XADD mystream * sensor-id 1234 temperature 15.4
&quot;1644856876818-0&quot;
</code></pre>
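<p>As a hedged aside, if you want to confirm that the listener&apos;s manual ack is working, you can inspect the consumer group state from the same redis-cli session (these are standard Redis commands, but the exact output will depend on your stream state):</p><pre><code class="language-bash">127.0.0.1:6379&gt; XPENDING mystream mygroup
127.0.0.1:6379&gt; XINFO GROUPS mystream</code></pre><p>XPENDING reports messages that were delivered to a consumer but not yet acknowledged; it should trend toward empty if the <code>acknowledge</code> call in MyStreamListener is succeeding.</p>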
<figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2022/02/image-21.png" class="kg-image" alt loading="lazy" width="1232" height="223" srcset="https://honstain.com/content/images/size/w600/2022/02/image-21.png 600w, https://honstain.com/content/images/size/w1000/2022/02/image-21.png 1000w, https://honstain.com/content/images/2022/02/image-21.png 1232w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2022/02/image-19.png" class="kg-image" alt loading="lazy" width="1311" height="331" srcset="https://honstain.com/content/images/size/w600/2022/02/image-19.png 600w, https://honstain.com/content/images/size/w1000/2022/02/image-19.png 1000w, https://honstain.com/content/images/2022/02/image-19.png 1311w" sizes="(min-width: 720px) 720px"></figure><p></p><hr>
<h2 id="spring-boot-shutdown-problem">Spring Boot Shutdown Problem</h2><h3 id="the-connection-is-already-closed-connection-reset-by-peer">The Connection is already closed / Connection reset by peer</h3><p>Occasionally when I shut down the service I get the following error (it does not happen every time).</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2022/02/image-17.png" class="kg-image" alt loading="lazy" width="1704" height="1115" srcset="https://honstain.com/content/images/size/w600/2022/02/image-17.png 600w, https://honstain.com/content/images/size/w1000/2022/02/image-17.png 1000w, https://honstain.com/content/images/size/w1600/2022/02/image-17.png 1600w, https://honstain.com/content/images/2022/02/image-17.png 1704w" sizes="(min-width: 720px) 720px"></figure><p>The critical parts of the error:</p>
<ul>
<li><code>[cTaskExecutor-1] io.lettuce.core.RedisChannelHandler      : Connection is already closed</code></li>
<li><code>[cTaskExecutor-1] ageListenerContainer$LoggingErrorHandler : Unexpected error occurred in scheduled task.</code></li>
<li><code>org.springframework.data.redis.RedisSystemException: Redis exception; nested exception is io.lettuce.core.RedisException: Connection closed</code></li>
</ul>
<div class="kg-card kg-file-card"><a class="kg-file-card-container" href="https://honstain.com/content/files/2022/02/unexpected_error.txt" title="Download" download><div class="kg-file-card-contents"><div class="kg-file-card-title">Unexpected error</div><div class="kg-file-card-caption"></div><div class="kg-file-card-metadata"><div class="kg-file-card-filename">unexpected_error.txt</div><div class="kg-file-card-filesize">9 KB</div></div></div><div class="kg-file-card-icon"><svg viewbox="0 0 24 24"><defs><style>.a{fill:none;stroke:currentColor;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.5px;}</style></defs><title>download-circle</title><polyline class="a" points="8.25 14.25 12 18 15.75 14.25"/><line class="a" x1="12" y1="6.75" x2="12" y2="18"/><circle class="a" cx="12" cy="12" r="11.25"/></svg></div></a></div><p>I also saw error/exception messages like:</p>
<ul>
<li><code>[cTaskExecutor-1] ageListenerContainer$LoggingErrorHandler : Unexpected error occurred in scheduled task.</code></li>
<li><code>org.springframework.data.redis.RedisConnectionFailureException: Unable to connect to Redis; nested exception is io.lettuce.core.RedisConnectionException: Unable to connect to localhost:6379</code></li>
<li><code>Caused by: java.io.IOException: Connection reset by peer</code></li>
<li><code>[oundedElastic-1] o.s.b.a.r.RedisReactiveHealthIndicator   : Redis health check failed</code></li>
</ul>
<h3 id="how-i-made-progress-investigating-the-error">How I Made Progress Investigating the Error</h3><p>While looking through the code associated with the exception, I gravitated towards <code>StreamPollTask</code>, which appears to be managed by <code>StreamMessageListenerContainer</code>, and found the spring-data-redis GitHub issue <a href="https://github.com/spring-projects/spring-data-redis/issues/2246?ref=honstain.com">https://github.com/spring-projects/spring-data-redis/issues/2246</a>, where a spring-data-redis contributor suggests that the <code>StreamMessageListenerContainer</code> needs to be shut down before the application.</p>
<p>This led me to look at two classes in more detail:</p>
<ul>
<li><code>StreamMessageListenerContainer</code>: defines the Redis calls (in a Java <code>Function</code>), orchestrates the subscriptions, and creates <code>StreamPollTask</code> instances.</li>
<li><code>StreamPollTask</code>: contains a few interesting structures, including a <code>CountDownLatch</code> and a <code>doLoop()</code> function.</li>
</ul>
<p>These are from the package <code>org.springframework.data.redis.stream</code> <a href="https://github.com/spring-projects/spring-data-redis?ref=honstain.com">https://github.com/spring-projects/spring-data-redis</a></p>
<p>The next big step for me came when I started experimenting with waiting on the <code>Subscription</code> object via <code>subscription.isActive</code> combined with modifying the <code>StreamMessageListenerContainerOptions</code> and <code>pollTimeout</code>.</p>
<p>I found that the longer the <code>pollTimeout</code> configured on the <code>StreamMessageListenerContainer</code>, the longer it took for <code>subscription.isActive</code> to become false. Note that <code>pollTimeout</code> defaults to 2 seconds.<br>
I struggled here because I assumed there must be a better way to wait for the blocking call to Redis (the one governed by the <code>pollTimeout</code>) to complete.</p>
<ul>
<li>docs.spring.io pollTimeout <a href="https://docs.spring.io/spring-data/redis/docs/2.2.6.RELEASE/api/org/springframework/data/redis/stream/StreamMessageListenerContainer.StreamMessageListenerContainerOptionsBuilder.html?ref=honstain.com#pollTimeout-java.time.Duration-">https://docs.spring.io/spring-data/redis/docs/2.2.6.RELEASE/api/org/springframework/data/redis/stream/StreamMessageListenerContainer.StreamMessageListenerContainerOptionsBuilder.html#pollTimeout-java.time.Duration-</a></li>
<li>docs.spring.io and the StreamReadOptions <a href="https://docs.spring.io/spring-data/redis/docs/2.2.0.M4/api/index.html?org%2Fspringframework%2Fdata%2Fredis%2Fconnection%2Fstream%2FStreamReadOptions.html=&amp;ref=honstain.com">https://docs.spring.io/spring-data/redis/docs/2.2.0.M4/api/index.html?org/springframework/data/redis/connection/stream/StreamReadOptions.html</a></li>
</ul>
<h3 id="workaround-for-the-issue">Workaround for the Issue</h3><p>This resulted in the following modification of our StreamConsumer.kt class:</p>
<ul>
<li>The important piece is the <code>@PreDestroy</code> annotation, where we block and poll while waiting for the subscription to finally stop.</li>
</ul>
<pre><code class="language-kotlin">package com.example.StreamConsumerDemo2

import org.springframework.data.redis.connection.RedisConnectionFactory
import org.springframework.data.redis.connection.stream.Consumer
import org.springframework.data.redis.connection.stream.MapRecord
import org.springframework.data.redis.connection.stream.ReadOffset
import org.springframework.data.redis.connection.stream.StreamOffset
import org.springframework.data.redis.stream.StreamMessageListenerContainer
import org.springframework.data.redis.stream.Subscription
import org.springframework.stereotype.Component
import java.time.Duration
import java.util.concurrent.TimeUnit
import javax.annotation.PreDestroy

@Component
class StreamConsumer(
    redisConnectionFactory: RedisConnectionFactory,
    streamListener: MyStreamListener,
) {

    final val POLL_TIMEOUT = 1000L

    final var container: StreamMessageListenerContainer&lt;String, MapRecord&lt;String, String, String&gt;&gt;
    final var subscription: Subscription

    init {
        val containerOptions = StreamMessageListenerContainer.StreamMessageListenerContainerOptions.builder()
            .pollTimeout(Duration.ofMillis(POLL_TIMEOUT))
            .build()
        container = StreamMessageListenerContainer.create(redisConnectionFactory, containerOptions)

        val consumer = Consumer.from(&quot;mygroup&quot;, &quot;Alice&quot;)
        subscription = container.receive(
            consumer,
            StreamOffset.create(&quot;mystream&quot;, ReadOffset.lastConsumed()),
            streamListener
        )
        container.start()
    }

    @PreDestroy
    fun preDestroy() {
        println(&quot;PreDestroy subscription - subscription?.isActive: ${subscription.isActive}&quot;)

        // Timing how long it takes https://stackoverflow.com/questions/1770010/how-do-i-measure-time-elapsed-in-java
        val startTime = System.nanoTime()

        // Using container.stop() since it already calls subscription.cancel()
        container.stop()
        //subscription.cancel()

        while (subscription.isActive) {
            //println(&quot;wait... 10ms&quot;)
            Thread.sleep(10)
        }

        val completionTime = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startTime)
        println(&quot;Time required for subscription.isActive==false : $completionTime ms&quot;)
        println(&quot;PreDestroy subscription - subscription?.isActive: ${subscription.isActive}&quot;)
    }
}
</code></pre>
<hr><h2 id="summary">Summary</h2><p>I think my solution leaves a lot to be desired, and I suspect my lack of familiarity with Spring Boot is probably evident. I have spent some time researching and trying to experiment with different shutdown strategies, but nothing has come to light that improves what I have above.</p><p>You can find the source code here:</p>
<ul>
<li>bitbucket <a href="https://bitbucket.org/honstain/redis-stream-prototype-v2/src/master/?ref=honstain.com">https://bitbucket.org/honstain/redis-stream-prototype-v2/src/master/</a></li>
<li>github <a href="https://github.com/AnthonyHonstain/Redis-Stream-Prototype-V2?ref=honstain.com">https://github.com/AnthonyHonstain/Redis-Stream-Prototype-V2</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Scalatra for Double Record Accounting]]></title><description><![CDATA[<p></p><p>In this post, we will explore an alternative database schema design for tracking physical inventory. Our <a href="https://honstain.com/inventory-transfer-row-locking/">previous posts</a> focused on using a single database record to model a distinct location and SKU (location and SKU being represented as basic strings in this example). But what would it look like if</p>]]></description><link>https://honstain.com/scalatra-and-slick-for-double-2/</link><guid isPermaLink="false">65b52aaf7a5d430e36b8ec88</guid><category><![CDATA[Scala]]></category><category><![CDATA[Scalatra]]></category><category><![CDATA[Slick]]></category><category><![CDATA[PostgreSQL]]></category><dc:creator><![CDATA[Anthony Honstain]]></dc:creator><pubDate>Sun, 26 May 2019 17:59:43 GMT</pubDate><media:content url="https://honstain.com/content/images/2019/05/scala_double_entry_create_inventory.JPG" medium="image"/><content:encoded><![CDATA[<img src="https://honstain.com/content/images/2019/05/scala_double_entry_create_inventory.JPG" alt="Scalatra for Double Record Accounting"><p></p><p>In this post, we will explore an alternative database schema design for tracking physical inventory. Our <a href="https://honstain.com/inventory-transfer-row-locking/">previous posts</a> focused on using a single database record to model a distinct location and SKU (location and SKU being represented as basic strings in this example). But what would it look like if we designed our schema around the idea of double-entry accounting? 
That would mean the quantity of a given location and SKU would be defined by the sum of multiple database records (we couldn&apos;t just look at a single record anymore to find its current quantity).</p><p>A good primer for double-entry accounting is Martin Fowler&apos;s blog post on the subject <a href="https://www.martinfowler.com/eaaDev/AccountingNarrative.html?ref=honstain.com">https://www.martinfowler.com/eaaDev/AccountingNarrative.html</a>.</p><h3 id="previous-blog-posts-in-this-series">Previous Blog Posts In This Series</h3><p>This post will significantly reference several of my previous blog posts. We now seek to draw a comparison on this double-entry schema to a design that uses a single record (for each location and SKU relationship). I would venture to guess that most developers would not start with a double-entry model.</p><ul><li>Part 1 - <a href="https://honstain.com/scalatra-inventory-management-service/">Creating a Scalatra Inventory Management Service</a></li><li>Part 2 - <a href="https://honstain.com/slick-upsert-and-select/">Implementing Create/Update in Slick</a></li><li>Part 3 - <a href="https://honstain.com/inventory-management-transfer-start/">Inventory Management Transfer</a></li><li>Part 4 - <a href="https://honstain.com/inventory-transfer-row-locking/">Inventory Management Transfer with Row Level Locking</a></li></ul><h3 id="source-code">Source Code</h3><ul><li>A basic skeleton of a Scalatra service you could use to build as you go while reading: <a href="https://bitbucket.org/honstain/scalatra-single-record-transfer-service/?ref=honstain.com">https://bitbucket.org/honstain/scalatra-single-record-transfer-service/</a> You would need to create a new DAO and wire up the REST endpoints and tests.</li><li>The completed solution: <a href="https://bitbucket.org/honstain/scalatra-double-record-transfer-service/src/master/?ref=honstain.com">https://bitbucket.org/honstain/scalatra-double-record-transfer-service/src/master/</a></li></ul><h2 
id="designing-the-schema-for-double-entry">Designing the Schema for Double-Entry</h2><p>Each transfer (physical movement of goods from one location to another) will be modeled with two new records, one decrementing the qty from the source location and one incrementing the qty for the destination.</p><p>Starting with a single location <code>LOC-01</code> that contains 2 units of <code>SKU-01</code></p>
<table>
<thead>
<tr>
<th>id</th>
<th>location</th>
<th>sku</th>
<th>qty</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>2</td>
</tr>
</tbody>
</table>
<p>If we wanted to move 1 unit of <code>SKU-01</code> to the location <code>LOC-02</code>, instead of updating record id:1 we would create two new records.</p>
<table>
<thead>
<tr>
<th>id</th>
<th>location</th>
<th>sku</th>
<th>qty</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>2</td>
</tr>
<tr>
<td>2</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>-1</td>
</tr>
<tr>
<td>3</td>
<td>LOC-02</td>
<td>SKU-01</td>
<td>1</td>
</tr>
</tbody>
</table>
<p>This means that to calculate the current quantity of <code>LOC-01</code> and <code>SKU-01</code> we would need to sum the quantity across records id:1 and id:2.</p>
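<p>To make the arithmetic concrete, the current quantity for a single location and SKU can be computed with a sum (a sketch against the <code>inventory_double</code> table defined later in this post):</p><pre><code class="language-sql">SELECT location, sku, SUM(qty) AS qty
FROM inventory_double
WHERE location = &apos;LOC-01&apos; AND sku = &apos;SKU-01&apos;
GROUP BY location, sku;
-- qty = 2 + (-1) = 1</code></pre>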
<p><strong>Using this design, we are never updating old database records, we are only creating new ones.</strong></p>
<p>All our operations (creating, moving, and decrementing inventory) would be handled by a DB insert. Because we treat existing records as immutable, there are fewer instances where the DB needs to manage the overhead of locking specific rows. In our previous post we ended up doing fine-grained row-level locking to provide consistency, locking both the source and the destination in the transaction, which can impact all the other queries running against that table.</p><p>Just like in our <a href="https://honstain.com/scalatra-inventory-management-service/">original example</a>, let&apos;s start with a DAO and a way to retrieve all the database records for testing.</p><pre><code class="language-scala">package org.bitbucket.honstain.inventory.dao

import org.slf4j.{Logger, LoggerFactory}
import slick.jdbc.{PostgresProfile, TransactionIsolation}
import slick.jdbc.PostgresProfile.api._

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object TRANSACTION {
  val ADJUST = &quot;adjust&quot;
  val TRANSFER = &quot;transfer&quot;
}

case class InventoryDoubleRecord(
                                  id: Option[Int],
                                  sku: String,
                                  qty: Int,
                                  txnType: String,
                                  location: String
                                )

class InventoryDoubleRecords(tag: Tag) extends Table[InventoryDoubleRecord](tag, &quot;inventory_double&quot;) {
  def id = column[Int](&quot;id&quot;, O.PrimaryKey, O.AutoInc)
  def sku = column[String](&quot;sku&quot;)
  def qty = column[Int](&quot;qty&quot;)
  def txnType = column[String](&quot;type&quot;)
  def location = column[String](&quot;location&quot;)
  def * =
    (id.?, sku, qty, txnType, location) &lt;&gt; (InventoryDoubleRecord.tupled, InventoryDoubleRecord.unapply)
}

object InventoryDoubleRecordDao extends TableQuery(new InventoryDoubleRecords(_)) {

  val logger: Logger = LoggerFactory.getLogger(getClass)

  def findAll(db: PostgresProfile.backend.DatabaseDef): Future[Seq[InventoryDoubleRecord]] = {
    db.run(this.result)
  }

  // Renamed from findAll: Scala does not allow overloads that differ only by return type.
  def findAllGrouped(db: PostgresProfile.backend.DatabaseDef): Future[Seq[(String, String, Option[Int])]] = {
    val groupByQuery = this.groupBy(x =&gt; (x.sku, x.location))
      .map{ case ((sku, location), group) =&gt; (sku, location, group.map(_.qty).sum) }
      .result
    db.run(groupByQuery)
  }
}
</code></pre>
<p>We will also make a basic test to start us off.</p><pre><code class="language-scala">package org.bitbucket.honstain.inventory.dao

import org.bitbucket.honstain.PostgresSpec

import org.scalatest.BeforeAndAfter
import org.scalatra.test.scalatest._
import slick.dbio.DBIO
import slick.jdbc.PostgresProfile.api._

import scala.concurrent.Await
import scala.concurrent.duration.Duration


class InventoryDoubleRecordDaoTests extends ScalatraFunSuite with BeforeAndAfter with PostgresSpec {

  def createInventoryTable: DBIO[Int] =
    sqlu&quot;&quot;&quot;
          CREATE TABLE inventory_double
          (
            id bigserial NOT NULL,
            sku text,
            qty integer,
            type text,
            location text,
            CONSTRAINT pk_double PRIMARY KEY (id)
          );
      &quot;&quot;&quot;
  def dropInventoryTable: DBIO[Int] =
    sqlu&quot;&quot;&quot;
          DROP TABLE IF EXISTS inventory_double;
      &quot;&quot;&quot;

  before {
    Await.result(database.run(createInventoryTable), Duration.Inf)
  }

  after {
    Await.result(database.run(dropInventoryTable), Duration.Inf)
  }

  val TEST_SKU = &quot;NewSku&quot;
  val BIN_01 = &quot;Bin-01&quot;
  val BIN_02 = &quot;Bin-02&quot;

  test(&quot;findAll when empty&quot;) {
    val futureFind = InventoryDoubleRecordDao.findAll(database)
    val findResult: Seq[InventoryDoubleRecord] = Await.result(futureFind, Duration.Inf)

    findResult should equal(List())
  }
  
  test(&quot;findAll with single location and SKU but multiple records&quot;) {
    val inventoryTable = TableQuery[InventoryDoubleRecords] ++= Seq(
      InventoryDoubleRecord(None, TEST_SKU, 1, TRANSACTION.ADJUST, BIN_01),
      InventoryDoubleRecord(None, TEST_SKU, 3, TRANSACTION.ADJUST, BIN_01),
      InventoryDoubleRecord(None, TEST_SKU, -1, TRANSACTION.ADJUST, BIN_01)
    )
    Await.result(database.run(inventoryTable), Duration.Inf)

    val futureFind = InventoryDoubleRecordDao.findAllGrouped(database)
    val findResult: Seq[(String, String, Option[Int])] = Await.result(futureFind, Duration.Inf)

    findResult should equal(List((TEST_SKU, BIN_01, Some(3))))
  }

  test(&quot;findAll with multiple location+SKU and multiple records&quot;) {
    val inventoryTable = TableQuery[InventoryDoubleRecords] ++= Seq(
      InventoryDoubleRecord(None, TEST_SKU, 1, TRANSACTION.ADJUST, BIN_01),
      InventoryDoubleRecord(None, TEST_SKU, 3, TRANSACTION.ADJUST, BIN_02),
      InventoryDoubleRecord(None, TEST_SKU, -1, TRANSACTION.ADJUST, BIN_01)
    )
    Await.result(database.run(inventoryTable), Duration.Inf)

    val futureFind = InventoryDoubleRecordDao.findAllGrouped(database)
    val findResult: Seq[(String, String, Option[Int])] = Await.result(futureFind, Duration.Inf)

    findResult should contain only ((TEST_SKU, BIN_02, Some(3)), (TEST_SKU, BIN_01, Some(0)))
  }
}
</code></pre>
<p>This should look very similar to our <a href="https://honstain.com/scalatra-inventory-management-service/">original example</a> (which modeled each location and SKU relationship with a single DB record), except that we now do a SQL aggregation over the records to compute the current inventory levels.</p><pre><code class="language-scala">  def findAllGrouped(db: PostgresProfile.backend.DatabaseDef): Future[Seq[(String, String, Option[Int])]] = {
    val groupByQuery = this.groupBy(x =&gt; (x.sku, x.location))
      .map{ case ((sku, location), group) =&gt; (sku, location, group.map(_.qty).sum) }
      .result
    db.run(groupByQuery)
  }
</code></pre>
<p>This Slick query is roughly equivalent to the following SQL:</p><pre><code class="language-SQL">SELECT sku, location, SUM(qty)
FROM inventory_double
GROUP BY sku, location
</code></pre>
<p><strong>Why the SQL aggregation and tuple return type?</strong> How we model the record in the database with the class <code>InventoryDoubleRecord</code> is now somewhat abstracted from how our business logic might want to handle the data. The <code>InventoryDoubleRecord</code> class has <code>id</code> and <code>txnType</code> columns which are not necessarily applicable when a client is asking the service how many items are in a location or how many units of a SKU are available. Hence we return a tuple from our find-all query; the reader could map this to a new class if they preferred.</p><h2 id="createinsertupdate-logic">Create/Insert/Update Logic</h2><p>Now we want the ability to create new inventory and update the qty of an existing location + SKU. You might refer to this sort of logic as an adjustment (which I track via the TRANSACTION enum). Where <strong>previously </strong>we could take a lock on a single record and do an atomic create or update:</p><pre><code class="language-sql">INSERT INTO inventory_single (sku, qty, location)
VALUES (&apos;SKU-01&apos;, 3, &apos;LOC-01&apos;)
ON CONFLICT ON CONSTRAINT inventory_single_sku_location_key
DO UPDATE SET qty = EXCLUDED.qty;
</code></pre>
<p>Now we have to consider multiple rows in order to find out the current quantity of a location and SKU. Our initial function definition could look like:</p><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
             sku: String,
             qty: Int,
             location: String
            ): Future[InventoryDoubleRecord] = {
    val insert = for {
      // Find the current count for this Location and SKU
      initGroupBy &lt;- // TODO - query to find existing qty

      // Insert a new record that will result in the desired qty
      _ &lt;- // TODO - insert a new record

      // Return the updated value
      newRecordBySku &lt;- // TODO return the current qty for the SKU and Location
    } yield newRecordBySku

    db.run(insert.transactionally)
  }
</code></pre>
<p>The structure we just outlined uses a Slick transaction and a for-comprehension to group a set of queries (lookup, insert, and lookup). We start with the lookup component.</p><ul>
<li><strong>Lookup existing qty</strong> - find the qty for the Location/SKU pair if it exists
<ul>
<li>A basic Slick aggregation with sum will give us a result of something like <code>Future[Seq[(String, String, Option[Int])]]</code>. We should be dealing with just a single record here given our schema design, so we can use <code>headOption</code> to grab the head of the sequence.</li>
</ul>
</li>
</ul>
<pre><code class="language-scala">this.filter(_.sku === sku).filter(_.location === location)
        .groupBy(x =&gt; x.sku)
        .map{ case (_, group) =&gt; (sku, group.map(_.qty).sum) }
        .result.headOption
</code></pre>
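<p>The nested <code>Option</code> shape from this lookup is worth pausing on: <code>headOption</code> accounts for the row being absent, and the Slick <code>sum</code> yields an <code>Option[Int]</code> because SQL SUM over zero rows is NULL. A small sketch of the shapes we end up pattern matching against (the names here are illustrative, not part of the DAO):</p>

```scala
// Shape of the lookup result: Option[(String, Option[Int])]
// - outer Option: no rows matched the location/SKU filter
// - inner Option: SQL SUM over zero rows is NULL
val noRows: Option[(String, Option[Int])] = None
val someRows: Option[(String, Option[Int])] = Some(("SKU-01", Some(2)))

// Collapse both Options into a plain quantity, defaulting to zero
def existingQty(result: Option[(String, Option[Int])]): Int =
  result.flatMap(_._2).getOrElse(0)
```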
<br>
<ul>
<li><strong>Insert</strong> - create a new database record.
<ul>
<li>Case 1 - We found no existing records for the SKU and Location pair.</li>
<li>Case 2 - We have an existing record and the QTY is needed to determine what the new qty to insert will be.</li>
</ul>
</li>
</ul>
<pre><code class="language-scala">      _ &lt;- {
        initGroupBy match {
          case None =&gt;
            this += InventoryDoubleRecord(None, sku, qty, TRANSACTION.ADJUST, location)
          case Some((_, Some(existingQty))) =&gt;
            logger.debug(s&quot;FOUND record $qty - $existingQty&quot;)
            this += InventoryDoubleRecord(None, sku, qty - existingQty, TRANSACTION.ADJUST, location)
          case _ =&gt; DBIO.failed(new Exception(&quot;Insert for create failed&quot;))
        }
      }
</code></pre>
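<p>The arithmetic in the <code>Some</code> branch is what turns an absolute quantity into a ledger entry: if the client asks for a qty of 17 and the existing records sum to 2, we insert a delta of 15. A minimal sketch of that rule (the helper name is mine, not part of the DAO):</p>

```scala
// The inserted adjustment is the delta that brings the summed ledger
// to the quantity the client asked for.
def adjustmentDelta(desiredQty: Int, existingQty: Option[Int]): Int =
  desiredQty - existingQty.getOrElse(0)

adjustmentDelta(17, Some(2))  // 15: existing records sum to 2, so insert 15
adjustmentDelta(3, None)      // 3: no prior records, insert the full qty
```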
<br>
<ul>
<li><strong>Lookup current quantity</strong> - get the current qty for the Location/SKU pair after our insert.
<ul>
<li>I made an effort to use the <code>InventoryDoubleRecord</code> class as the return type so we could more easily draw a parallel with our single record schema design. It also illustrates an example of mapping the results (even if I am not personally a fan of this code).</li>
</ul>
</li>
</ul>
<pre><code class="language-scala">      newRecordGroupBy &lt;- this.filter(_.sku === sku).filter(_.location === location)
        .groupBy(x =&gt; x.sku).map{ case (_, group) =&gt; (sku, group.map(_.qty).sum) }
        .result.headOption
      newRecordBySku &lt;- {
        newRecordGroupBy match {
          case Some((_, Some(newQty))) =&gt;
            DBIO.successful(InventoryDoubleRecord(None, sku, newQty, TRANSACTION.ADJUST, location))
          case _ =&gt;
            DBIO.failed(new Exception(&quot;Insert for create failed&quot;))
        }
      }
</code></pre>
<br><p>We want some tests to cover this logic as well. These are not beautiful or fully comprehensive tests, but hopefully they are illustrative of the logic we are testing in the DAO. They are a bit on the verbose side.</p><pre><code class="language-scala">  test(&quot;create when empty&quot;) {
    val futureCreate = InventoryDoubleRecordDao.create(database, TEST_SKU, 2, BIN_01)
    val createResult = Await.result(futureCreate, Duration.Inf)
    createResult should equal(InventoryDoubleRecord(None, TEST_SKU, 2, TRANSACTION.ADJUST, BIN_01))

    val futureFind = InventoryDoubleRecordDao.findAllRaw(database)
    val findResult: Seq[InventoryDoubleRecord] = Await.result(futureFind, Duration.Inf)
    findResult should equal(List(InventoryDoubleRecord(Option(1), TEST_SKU, 2, TRANSACTION.ADJUST, BIN_01)))
  }

  test(&quot;create with existing record&quot;) {
    Await.result(InventoryDoubleRecordDao.create(database, TEST_SKU, 2, BIN_01), Duration.Inf)

    val futureUpdate = InventoryDoubleRecordDao.create(database, TEST_SKU, 1, BIN_01)
    val updateResult= Await.result(futureUpdate, Duration.Inf)
    updateResult should equal(InventoryDoubleRecord(None, TEST_SKU, 1, TRANSACTION.ADJUST, BIN_01))

    val futureFindAllRaw = InventoryDoubleRecordDao.findAllRaw(database)
    val findResult: Seq[InventoryDoubleRecord] = Await.result(futureFindAllRaw, Duration.Inf)
    findResult should equal(List(
      InventoryDoubleRecord(Option(1), TEST_SKU, 2, TRANSACTION.ADJUST, BIN_01),
      InventoryDoubleRecord(Option(2), TEST_SKU, -1, TRANSACTION.ADJUST, BIN_01),
    ))
  }
</code></pre>
<h3 id="final-createinsertupdate-code">Final Create/Insert/Update Code</h3><p>This gives us the following create function:</p><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
             sku: String,
             qty: Int,
             location: String
            ): Future[InventoryDoubleRecord] = {
    val insert = for {
      // Find the current count for this location and SKU
      initGroupBy &lt;- this.filter(_.sku === sku).filter(_.location === location)
        .groupBy(x =&gt; x.sku).map{ case (_, group) =&gt; (sku, group.map(_.qty).sum) }
        .result.headOption

      // Insert a new record that will result in the desired qty
      _ &lt;- {
        initGroupBy match {
          case None =&gt;
            this += InventoryDoubleRecord(None, sku, qty, TRANSACTION.ADJUST, location)
          case Some((_, Some(existingQty))) =&gt;
            logger.debug(s&quot;FOUND record $qty - $existingQty&quot;)
            this += InventoryDoubleRecord(None, sku, qty - existingQty, TRANSACTION.ADJUST, location)
          case _ =&gt; DBIO.failed(new Exception(&quot;Insert for create failed&quot;))
        }
      }

      // Return the updated value
      newRecordGroupBy &lt;- this.filter(_.sku === sku).filter(_.location === location)
        .groupBy(x =&gt; x.sku).map{ case (_, group) =&gt; (sku, group.map(_.qty).sum) }
        .result.headOption
      newRecordBySku &lt;- {
        newRecordGroupBy match {
          case Some((_, Some(newQty))) =&gt;
            DBIO.successful(InventoryDoubleRecord(None, sku, newQty, TRANSACTION.ADJUST, location))
          case _ =&gt;
            DBIO.failed(new Exception(&quot;Insert for create failed&quot;))
        }
      }
    } yield newRecordBySku

    db.run(insert.transactionally)
  }
</code></pre>
<p>The reader may notice that this query is susceptible to consistency problems when we have concurrent calls. We have not set an isolation level for our transaction, and just like the problems in our inventory transfer post <a href="https://honstain.com/inventory-transfer-row-locking/">https://honstain.com/inventory-transfer-row-locking/</a>, we can get undesirable behavior if multiple create requests for a given location/SKU overlap.</p><h3 id="demonstrate-the-consistency-problem">Demonstrate the Consistency Problem</h3><p>If you use siege to pound on the service with a few concurrent users, you can pretty quickly observe some undesirable behavior.</p><pre><code class="language-text">siege -v -c2 -r10 --content-type &quot;application/json&quot; -f siege_urls.txt

#### siege_urls.txt ####
127.0.0.1:8080/double
127.0.0.1:8080/double POST {&quot;sku&quot;: &quot;SKU-01&quot;,&quot;qty&quot;: 1,&quot;location&quot;: &quot;LOC-01&quot;}
127.0.0.1:8080/double POST {&quot;sku&quot;: &quot;SKU-01&quot;,&quot;qty&quot;: 17,&quot;location&quot;: &quot;LOC-01&quot;}
</code></pre>
<p>We would hope to only ever see a quantity of 1 or 17 (with corresponding adjustment records in the DB).</p><table>
<thead>
<tr>
<th>id</th>
<th>sku</th>
<th>qty</th>
<th>type</th>
<th>location</th>
</tr>
</thead>
<tbody>
<tr>
<td>448</td>
<td>SKU-01</td>
<td>1</td>
<td>adjust</td>
<td>LOC-01</td>
</tr>
<tr>
<td>449</td>
<td>SKU-01</td>
<td>1</td>
<td>adjust</td>
<td>LOC-01</td>
</tr>
<tr>
<td>450</td>
<td>SKU-01</td>
<td>15</td>
<td>adjust</td>
<td>LOC-01</td>
</tr>
<tr>
<td>451</td>
<td>SKU-01</td>
<td>15</td>
<td>adjust</td>
<td>LOC-01</td>
</tr>
<tr>
<td>452</td>
<td>SKU-01</td>
<td>-31</td>
<td>adjust</td>
<td>LOC-01</td>
</tr>
<tr>
<td>453</td>
<td>SKU-01</td>
<td>0</td>
<td>adjust</td>
<td>LOC-01</td>
</tr>
</tbody>
</table>
<pre><code class="language-text">#### Example Logging from Scalatra Service ####
10:41:52.365 [scala-execution-context-global-35] DEBUG o.b.h.i.d.InventoryDoubleRecordDao$ - INSERT need qty: 17 existing: 2 for sku: SKU-01 location: LOC-01
10:41:52.372 [scala-execution-context-global-35] DEBUG o.b.h.i.d.InventoryDoubleRecordDao$ - INSERT need qty: 17 existing: 2 for sku: SKU-01 location: LOC-01
10:41:52.387 [qtp1637506559-12] DEBUG o.b.h.inventory.app.ToyInventory - GET: location: SKU-01 sku: LOC-01 qty: Some(17)
10:41:56.813 [qtp1637506559-17] DEBUG o.b.h.inventory.app.ToyInventory - GET: location: SKU-01 sku: LOC-01 qty: Some(32)
10:41:56.818 [scala-execution-context-global-39] DEBUG o.b.h.i.d.InventoryDoubleRecordDao$ - INSERT need qty: 1 existing: 32 for sku: SKU-01 location: LOC-01
10:41:56.827 [scala-execution-context-global-39] DEBUG o.b.h.i.d.InventoryDoubleRecordDao$ - INSERT need qty: 1 existing: 1 for sku: SKU-01 location: LOC-01
10:41:56.839 [scala-execution-context-global-38] DEBUG o.b.h.i.d.InventoryDoubleRecordDao$ - INSERT need qty: 17 existing: 1 for sku: SKU-01 location: LOC-01
10:41:56.862 [scala-execution-context-global-38] DEBUG o.b.h.i.d.InventoryDoubleRecordDao$ - INSERT need qty: 17 existing: 17 for sku: SKU-01 location: LOC-01
</code></pre>
<p>I would encourage you to play with several different isolation levels and observe how PostgreSQL handles this sort of query (a read followed by an insert).</p><pre><code class="language-scala">db.run(insert.transactionally.withTransactionIsolation(TransactionIsolation.Serializable))
</code></pre>
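<p>One thing to keep in mind while experimenting: under <code>Serializable</code>, PostgreSQL may abort one of two conflicting transactions with a serialization failure (SQLSTATE 40001), which surfaces in Slick as a failed <code>Future</code>. A hedged sketch of the kind of retry wrapper a caller might want (this helper is my own illustration, not part of Slick):</p>

```scala
import scala.concurrent.{ExecutionContext, Future}

// Retry a DB action a fixed number of times. Under Serializable isolation a
// serialization failure is safe to retry, because the aborted transaction
// made no visible changes.
def withRetries[T](attempts: Int)(action: () => Future[T])(implicit ec: ExecutionContext): Future[T] =
  action().recoverWith {
    case _ if attempts > 1 => withRetries(attempts - 1)(action)
  }
```

A production version would match only the serialization failure exception rather than blindly retrying every error.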
<h3 id="summary-of-the-initial-createinsertupdate-logic">Summary of the Initial Create/Insert/Update Logic</h3><p>We have created a basic design for tracking inventory using a DB schema based on double-entry accounting. It still has some gaps with consistency (just like our initial attempt that relied on a single DB record for each location and SKU pair), but hopefully this gives you some ideas.</p><p>Source code for this blog post: <a href="https://bitbucket.org/honstain/scalatra-double-record-transfer-service/src/master/?ref=honstain.com">https://bitbucket.org/honstain/scalatra-double-record-transfer-service/src/master/</a></p><h3 id="increasing-consistency-of-the-createinsertupdate-logic">Increasing Consistency of the Create/Insert/Update Logic </h3><p>One idea we can explore is taking a pessimistic lock on the location and SKU. What would that imply in our current schema?</p><ul><li>Using a <code>SELECT FOR UPDATE</code> would mean that we lock all the records needed to compute the current value; this would be worse (in terms of DB overhead to support locking) than our schema that used a single record to track the quantity.</li></ul><table>
<thead>
<tr>
<th>id</th>
<th>location</th>
<th>sku</th>
<th>qty</th>
<th>type</th>
</tr>
</thead>
<tbody>
<tr>
<td>474</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>1</td>
<td>adjust</td>
</tr>
<tr>
<td>475</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>1</td>
<td>adjust</td>
</tr>
<tr>
<td>476</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>1</td>
<td>adjust</td>
</tr>
<tr>
<td>477</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>1</td>
<td>adjust</td>
</tr>
<tr>
<td>478</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>-3</td>
<td>adjust</td>
</tr>
<tr>
<td>479</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>0</td>
<td>adjust</td>
</tr>
</tbody>
</table>
<p>We will instead create a new table just to support this locking behavior.</p><pre><code class="language-sql">CREATE TABLE inventory_lock
(
  location text,
  sku text,
  revision integer,
  CONSTRAINT pk_lock PRIMARY KEY (location, sku)
);
</code></pre>
<pre><code class="language-scala">class InventoryDoubleRecordLocks(tag: Tag) extends Table[(String, String, Int)](tag, &quot;inventory_lock&quot;) {
  def location = column[String](&quot;location&quot;)
  def sku = column[String](&quot;sku&quot;)
  def revision = column[Int](&quot;revision&quot;)
  def * = (location, sku, revision)
}
</code></pre>
<p>Given this additional table, here is one way we might include it in our create function&apos;s database transaction:</p><pre><code class="language-scala">      createLock &lt;- {
        TableQuery[InventoryDoubleRecordLocks].filter(x =&gt; x.location === location &amp;&amp; x.sku === sku).forUpdate.result
      }

      _ &lt;- {
        createLock match {
          case Seq((`location`, `sku`, _)) =&gt;
            val updateLock = TableQuery[InventoryDoubleRecordLocks]
            val q = for { x &lt;- updateLock if x.location === location &amp;&amp; x.sku === sku } yield x.revision
            q.update(createLock.head._3 + 1)
          case _ =&gt;
            // Create if no record lock existed
            TableQuery[InventoryDoubleRecordLocks] += (location, sku, 0)
        }
      }
</code></pre>
<p>This has two main pieces: first we attempt to read the record <code>FOR UPDATE</code>, then we create a new record if this is the first time the Location/SKU pair has been seen.</p><ul><li>A helpful reference here would be to review: <a href="https://www.postgresql.org/docs/11/explicit-locking.html?ref=honstain.com">https://www.postgresql.org/docs/11/explicit-locking.html</a></li></ul><p>I make no claim that this is the best solution to the problem, but it illustrates one way to maintain consistency. This solution does not make use of foreign key constraints or joins.</p><h2 id="summary">Summary</h2><p>We have taken a tour of Scalatra and Slick while implementing a very rudimentary service for tracking physical inventory (tracking quantities of a SKU by location). There are many ways that you could solve this problem, and I have tried to outline some of the options and what the trade-offs are.</p><p>The primary goal of this set of blog posts on this toy inventory system was to learn and share (I was exploring as I went). I am still inexperienced with Scala and the ecosystem (while being reasonably comfortable with PostgreSQL).</p><p>I found Scalatra and Slick reasonably difficult to adapt to, probably because I am still trying to write Java and JDBI. I humbly admit my weakness here.</p><p>I had originally set out to compare the transfer logic between the single record design (one DB record to track the qty of a location/SKU pair) with the double-entry model covered in this post. I have implemented a transfer function (which you can find here <a href="https://bitbucket.org/honstain/scalatra-double-record-inventory?ref=honstain.com">https://bitbucket.org/honstain/scalatra-double-record-inventory</a>) but I will not create a special blog post to cover it.
I think the create example here is sufficient to illustrate the logic and I would like to move on from Scalatra and experiment with the Play framework <a href="https://www.playframework.com/?ref=honstain.com">https://www.playframework.com/</a>.</p><h3 id="want-to-know-more-about-double-entry-accounting">Want to Know More About Double-Entry Accounting?</h3><p>Some initial references that are worth considering if you would like to further familiarize yourself with double record/entry accounting:</p><ul>
<li>Martin Fowler has some excellent posts that I referenced
<ul>
<li>Start here for a nice primer on double-entry accounting along with some historical background on the practice <a href="https://www.martinfowler.com/eaaDev/AccountingNarrative.html?ref=honstain.com">https://www.martinfowler.com/eaaDev/AccountingNarrative.html</a></li>
<li><a href="https://www.martinfowler.com/eaaDev/AccountingTransaction.html?ref=honstain.com">https://www.martinfowler.com/eaaDev/AccountingTransaction.html</a></li>
<li><a href="https://www.martinfowler.com/eaaDev/Account.html?ref=honstain.com">https://www.martinfowler.com/eaaDev/Account.html</a></li>
</ul>
</li>
<li>StackOverflow has some interesting debates on the matter
<ul>
<li><a href="https://stackoverflow.com/questions/287097/inventory-database-design?ref=honstain.com">https://stackoverflow.com/questions/287097/inventory-database-design</a></li>
<li><a href="https://stackoverflow.com/questions/4373968/database-design-calculating-the-account-balance?ref=honstain.com">https://stackoverflow.com/questions/4373968/database-design-calculating-the-account-balance</a></li>
</ul>
</li>
<li>Michael Wigley authored an interesting article on double-entry accounting
<ul>
<li><a href="https://stackoverflow.com/questions/4373968/database-design-calculating-the-account-balance?ref=honstain.com">https://stackoverflow.com/questions/4373968/database-design-calculating-the-account-balance</a></li>
</ul>
</li>
</ul>
<p></p><h3 id></h3>]]></content:encoded></item><item><title><![CDATA[Scalatra for Double Record Accounting]]></title><description><![CDATA[<p></p><p>In this post, we will explore an alternative database schema design for tracking physical inventory. Our <a href="https://honstain.com/inventory-transfer-row-locking/">previous posts</a> focused on using a single database record to model a distinct location and SKU (location and SKU being represented as basic strings in this example). But what would it look like if</p>]]></description><link>https://honstain.com/scalatra-and-slick-for-double/</link><guid isPermaLink="false">65b526ba7a5d430e36b8ebfe</guid><category><![CDATA[Scala]]></category><category><![CDATA[Scalatra]]></category><category><![CDATA[Slick]]></category><category><![CDATA[PostgreSQL]]></category><dc:creator><![CDATA[Anthony Honstain]]></dc:creator><pubDate>Sun, 26 May 2019 17:59:43 GMT</pubDate><media:content url="https://honstain.com/content/images/2019/05/scala_double_entry_create_inventory.JPG" medium="image"/><content:encoded><![CDATA[<img src="https://honstain.com/content/images/2019/05/scala_double_entry_create_inventory.JPG" alt="Scalatra for Double Record Accounting"><p></p><p>In this post, we will explore an alternative database schema design for tracking physical inventory. Our <a href="https://honstain.com/inventory-transfer-row-locking/">previous posts</a> focused on using a single database record to model a distinct location and SKU (location and SKU being represented as basic strings in this example). But what would it look like if we designed our schema around the idea of double-entry accounting? 
That would mean the quantity of a given location and SKU would be defined by the sum of multiple database records (we couldn&apos;t just look at a single record anymore to find its current quantity).</p><p>A good primer for double-entry accounting is Martin Fowler&apos;s blog post on the subject <a href="https://www.martinfowler.com/eaaDev/AccountingNarrative.html?ref=honstain.com">https://www.martinfowler.com/eaaDev/AccountingNarrative.html</a>.</p><h3 id="previous-blog-posts-in-this-series">Previous Blog Posts In This Series</h3><p>This post will significantly reference several of my previous blog posts. We now seek to draw a comparison between this double-entry schema and a design that uses a single record (for each location and SKU relationship). I would venture to guess that most developers would not start with a double-entry model.</p><ul><li>Part 1 - <a href="https://honstain.com/scalatra-inventory-management-service/">Creating a Scalatra Inventory Management Service</a></li><li>Part 2 - <a href="https://honstain.com/slick-upsert-and-select/">Implementing Create/Update in Slick</a></li><li>Part 3 - <a href="https://honstain.com/inventory-management-transfer-start/">Inventory Management Transfer</a></li><li>Part 4 - <a href="https://honstain.com/inventory-transfer-row-locking/">Inventory Management Transfer with Row Level Locking</a></li></ul><h3 id="source-code">Source Code</h3><ul><li>A basic skeleton of a Scalatra service you could use to build as you go while reading: <a href="https://bitbucket.org/honstain/scalatra-single-record-transfer-service/?ref=honstain.com">https://bitbucket.org/honstain/scalatra-single-record-transfer-service/</a>. You would need to create a new DAO and wire up the REST endpoints and tests.</li><li>The completed solution: <a href="https://bitbucket.org/honstain/scalatra-double-record-transfer-service/src/master/?ref=honstain.com">https://bitbucket.org/honstain/scalatra-double-record-transfer-service/src/master/</a></li></ul><h2 
id="designing-the-schema-for-double-entry">Designing the Schema for Double-Entry</h2><p>Each transfer (physical movement of goods from one location to another) will be modeled with two new records, one decrementing the qty from the source location and one incrementing the qty for the destination.</p><!--kg-card-begin: markdown--><p>Starting with a single location <code>LOC-01</code> that contains 2 units of <code>SKU-01</code></p>
<table>
<thead>
<tr>
<th>id</th>
<th>location</th>
<th>sku</th>
<th>qty</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>2</td>
</tr>
</tbody>
</table>
<p>If we wanted to move 1 unit of <code>SKU-01</code> to the location <code>LOC-02</code>, instead of updating record id:1 we would create two new records.</p>
<table>
<thead>
<tr>
<th>id</th>
<th>location</th>
<th>sku</th>
<th>qty</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>2</td>
</tr>
<tr>
<td>2</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>-1</td>
</tr>
<tr>
<td>3</td>
<td>LOC-02</td>
<td>SKU-01</td>
<td>1</td>
</tr>
</tbody>
</table>
<p>This means that to calculate the current quantity of <code>LOC-01</code> and <code>SKU-01</code> we would need to sum the quantity across records id:1 and id:2.</p>
<p><strong>Using this design, we are never updating old database records, we are only creating new ones.</strong></p>
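<p>The ledger idea above can be sketched with plain Scala collections (the record names here are illustrative, not the service&apos;s classes):</p>

```scala
// An in-memory sketch of the double-entry ledger from the tables above
case class Entry(location: String, sku: String, qty: Int)

val ledger = List(
  Entry("LOC-01", "SKU-01", 2),  // initial stock
  Entry("LOC-01", "SKU-01", -1), // transfer out of LOC-01
  Entry("LOC-02", "SKU-01", 1)   // transfer into LOC-02
)

// The current quantity of a location/SKU pair is the sum of its entries
val onHand: Map[(String, String), Int] =
  ledger.groupBy(e => (e.location, e.sku))
    .map { case (key, entries) => key -> entries.map(_.qty).sum }
```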
<!--kg-card-end: markdown--><p>All our operations (creating, moving, and decrementing inventory) would be handled by a DB insert. Because we treat existing records as immutable, there are fewer instances where the DB needs to manage the overhead of locking specific rows. We saw in our previous post how we ended up doing some fine-grained row-level locking in order to provide consistency. We ended up locking everything in the transaction (both the source and the destination), which can have an impact on all the other database queries that might be running against that table.</p><p>Just like in our <a href="https://honstain.com/scalatra-inventory-management-service/">original example</a>, let&apos;s start with a DAO and a way to retrieve all the database records for testing.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">package org.bitbucket.honstain.inventory.dao

import org.slf4j.{Logger, LoggerFactory}
import slick.jdbc.{PostgresProfile, TransactionIsolation}
import slick.jdbc.PostgresProfile.api._

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object TRANSACTION {
  val ADJUST = &quot;adjust&quot;
  val TRANSFER = &quot;transfer&quot;
}

case class InventoryDoubleRecord(
                                  id: Option[Int],
                                  sku: String,
                                  qty: Int,
                                  txnType: String,
                                  location: String
                                )

class InventoryDoubleRecords(tag: Tag) extends Table[InventoryDoubleRecord](tag, &quot;inventory_double&quot;) {
  def id = column[Int](&quot;id&quot;, O.PrimaryKey, O.AutoInc)
  def sku = column[String](&quot;sku&quot;)
  def qty = column[Int](&quot;qty&quot;)
  def txnType = column[String](&quot;type&quot;)
  def location = column[String](&quot;location&quot;)
  def * =
    (id.?, sku, qty, txnType, location) &lt;&gt; (InventoryDoubleRecord.tupled, InventoryDoubleRecord.unapply)
}

object InventoryDoubleRecordDao extends TableQuery(new InventoryDoubleRecords(_)) {

  val logger: Logger = LoggerFactory.getLogger(getClass)

  // Raw rows (used by the tests); the aggregated findAll below is the primary read path
  def findAllRaw(db: PostgresProfile.backend.DatabaseDef): Future[Seq[InventoryDoubleRecord]] = {
    db.run(this.result)
  }

  def findAll(db: PostgresProfile.backend.DatabaseDef): Future[Seq[(String, String, Option[Int])]] = {
    val groupByQuery = this.groupBy(x =&gt; (x.sku, x.location))
      .map{ case ((sku, location), group) =&gt; (sku, location, group.map(_.qty).sum) }
      .result
    db.run(groupByQuery)
  }
}
</code></pre>
<!--kg-card-end: markdown--><p>We will also make a basic test to start us off.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">package org.bitbucket.honstain.inventory.dao

import org.bitbucket.honstain.PostgresSpec

import org.scalatest.BeforeAndAfter
import org.scalatra.test.scalatest._
import slick.dbio.DBIO
import slick.jdbc.PostgresProfile.api._

import scala.concurrent.Await
import scala.concurrent.duration.Duration


class InventoryDoubleRecordDaoTests extends ScalatraFunSuite with BeforeAndAfter with PostgresSpec {

  def createInventoryTable: DBIO[Int] =
    sqlu&quot;&quot;&quot;
          CREATE TABLE inventory_double
          (
            id bigserial NOT NULL,
            sku text,
            qty integer,
            type text,
            location text,
            CONSTRAINT pk_double PRIMARY KEY (id)
          );
      &quot;&quot;&quot;
  def dropInventoryTable: DBIO[Int] =
    sqlu&quot;&quot;&quot;
          DROP TABLE IF EXISTS inventory_double;
      &quot;&quot;&quot;

  before {
    Await.result(database.run(createInventoryTable), Duration.Inf)
  }

  after {
    Await.result(database.run(dropInventoryTable), Duration.Inf)
  }

  val TEST_SKU = &quot;NewSku&quot;
  val BIN_01 = &quot;Bin-01&quot;
  val BIN_02 = &quot;Bin-02&quot;

  test(&quot;findAll when empty&quot;) {
    val futureFind = InventoryDoubleRecordDao.findAllRaw(database)
    val findResult: Seq[InventoryDoubleRecord] = Await.result(futureFind, Duration.Inf)

    findResult should equal(List())
  }
  
  test(&quot;findAll with single location and SKU but multiple records&quot;) {
    val inventoryTable = TableQuery[InventoryDoubleRecords] ++= Seq(
      InventoryDoubleRecord(None, TEST_SKU, 1, TRANSACTION.ADJUST, BIN_01),
      InventoryDoubleRecord(None, TEST_SKU, 3, TRANSACTION.ADJUST, BIN_01),
      InventoryDoubleRecord(None, TEST_SKU, -1, TRANSACTION.ADJUST, BIN_01)
    )
    Await.result(database.run(inventoryTable), Duration.Inf)

    val futureFind = InventoryDoubleRecordDao.findAll(database)
    val findResult: Seq[(String, String, Option[Int])] = Await.result(futureFind, Duration.Inf)

    findResult should equal(List((TEST_SKU, BIN_01, Some(3))))
  }

  test(&quot;findAll with multiple location+SKU and multiple records&quot;) {
    val inventoryTable = TableQuery[InventoryDoubleRecords] ++= Seq(
      InventoryDoubleRecord(None, TEST_SKU, 1, TRANSACTION.ADJUST, BIN_01),
      InventoryDoubleRecord(None, TEST_SKU, 3, TRANSACTION.ADJUST, BIN_02),
      InventoryDoubleRecord(None, TEST_SKU, -1, TRANSACTION.ADJUST, BIN_01)
    )
    Await.result(database.run(inventoryTable), Duration.Inf)

    val futureFind = InventoryDoubleRecordDao.findAll(database)
    val findResult: Seq[(String, String, Option[Int])] = Await.result(futureFind, Duration.Inf)

    findResult should contain only ((TEST_SKU, BIN_02, Some(3)), (TEST_SKU, BIN_01, Some(0)))
  }
}
</code></pre>
<!--kg-card-end: markdown--><p>This should look very similar to our <a href="https://honstain.com/scalatra-inventory-management-service/">original example</a> (that modeled each location and SKU relationship with a single DB record), except for the fact that we now do a SQL aggregation on the results to find all the current inventory information.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def findAll(db: PostgresProfile.backend.DatabaseDef): Future[Seq[(String, String, Option[Int])]] = {
    val groupByQuery = this.groupBy(x =&gt; (x.sku, x.location))
      .map{ case ((sku, location), group) =&gt; (sku, location, group.map(_.qty).sum) }
      .result
    db.run(groupByQuery)
  }
</code></pre>
<!--kg-card-end: markdown--><p>This Slick query is roughly equivalent to the following SQL:</p><!--kg-card-begin: markdown--><pre><code class="language-SQL">SELECT sku, location, SUM(qty)
FROM inventory_double
GROUP BY sku, location
</code></pre>
<!--kg-card-end: markdown--><p><strong>Why the SQL aggregation and tuple return type?</strong> How we model the record in the database with the class <code>InventoryDoubleRecord</code> is now somewhat abstracted from how our business logic might want to handle the data. The <code>InventoryDoubleRecord</code> class has <code>id</code> and <code>txnType</code> columns which are not necessarily applicable when a client is asking the service how many items are in a location or how many units of a SKU are available. Hence we return a tuple from our <code>findAll</code> function; the reader could map this to a new class if they preferred.</p><h2 id="create-insert-update-logic">Create/Insert/Update Logic</h2><p>Now we want the ability to create new inventory and update the qty of an existing location + SKU. You might refer to this sort of logic as an adjustment (which I track via the TRANSACTION enum). Where <strong>previously </strong>we could take a lock on a single record and do an atomic create or update:</p><!--kg-card-begin: markdown--><pre><code class="language-sql">INSERT INTO inventory_single (sku, qty, location)
VALUES (&apos;SKU-01&apos;, 3, &apos;LOC-01&apos;)
ON CONFLICT ON CONSTRAINT inventory_single_sku_location_key
DO UPDATE SET qty = EXCLUDED.qty;
</code></pre>
<!--kg-card-end: markdown--><p>Now we have to consider multiple rows in order to find out the current quantity of a location and SKU. Our initial function definition could look like:</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
             sku: String,
             qty: Int,
             location: String
            ): Future[InventoryDoubleRecord] = {
    val insert = for {
      // Find the current count for this Location and SKU
      initGroupBy &lt;- // TODO - query to find existing qty

      // Insert a new record that will result in the desired qty
      _ &lt;- // TODO - insert a new record

      // Return the updated value
      newRecordBySku &lt;- // TODO return the current qty for the SKU and Location
    } yield newRecordBySku

    db.run(insert.transactionally)
  }
</code></pre>
<!--kg-card-end: markdown--><p>The structure we just outlined uses a Slick transaction and a for-comprehension to group a set of queries (lookup, insert, and lookup). We start with the lookup component.</p><!--kg-card-begin: markdown--><ul>
<li><strong>Lookup existing qty</strong> - find the qty for the Location/SKU pair if it exists
<ul>
<li>A basic Slick aggregation with sum will give us a result of something like <code>Future[Seq[(String, String, Option[Int])]]</code>. We should be dealing with just a single record here given our schema design, so we can use <code>headOption</code> to grab the head of the sequence.</li>
</ul>
</li>
</ul>
<pre><code class="language-scala">this.filter(_.sku === sku).filter(_.location === location)
        .groupBy(x =&gt; x.sku)
        .map{ case (_, group) =&gt; (sku, group.map(_.qty).sum) }
        .result.headOption
</code></pre>
<br>
<ul>
<li><strong>Insert</strong> - create a new database record.
<ul>
<li>Case 1 - We found no existing records for the SKU and Location pair.</li>
<li>Case 2 - We have an existing record and the QTY is needed to determine what the new qty to insert will be.</li>
</ul>
</li>
</ul>
<pre><code class="language-scala">      _ &lt;- {
        initGroupBy match {
          case None =&gt;
            this += InventoryDoubleRecord(None, sku, qty, TRANSACTION.ADJUST, location)
          case Some((_, Some(existingQty))) =&gt;
            logger.debug(s&quot;FOUND record $qty - $existingQty&quot;)
            this += InventoryDoubleRecord(None, sku, qty - existingQty, TRANSACTION.ADJUST, location)
          case _ =&gt; DBIO.failed(new Exception(&quot;Insert for create failed&quot;))
        }
      }
</code></pre>
<br>
<ul>
<li><strong>Lookup current quantity</strong> - get the current qty for the Location/SKU pair after our insert.
<ul>
<li>I made an effort to use <code>InventoryDoubleRecord</code> as the return type so we could more easily draw a parallel with our single-record schema design. It also illustrates an example of mapping the results (even if I am not personally a fan of this code).</li>
</ul>
</li>
</ul>
<pre><code class="language-scala">      newRecordGroupBy &lt;- this.filter(_.sku === sku).filter(_.location === location)
        .groupBy(x =&gt; x.sku).map{ case (_, group) =&gt; (sku, group.map(_.qty).sum) }
        .result.headOption
      newRecordBySku &lt;- {
        newRecordGroupBy match {
          case Some((_, Some(newQty))) =&gt;
            DBIO.successful (InventoryDoubleRecord (None, sku, newQty, TRANSACTION.ADJUST, location) )
          case _ =&gt;
            DBIO.failed(new Exception(&quot;Insert for create failed&quot;))
        }
      }
</code></pre>
<br><!--kg-card-end: markdown--><p>We want some tests to cover this logic as well. These are not beautiful or fully comprehensive tests, but hopefully they are illustrative of the logic we are testing in the DAO. They are a bit on the verbose side.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  test(&quot;create when empty&quot;) {
    val futureCreate = InventoryDoubleRecordDao.create(database, TEST_SKU, 2, BIN_01)
    val createResult = Await.result(futureCreate, Duration.Inf)
    createResult should equal(InventoryDoubleRecord(None, TEST_SKU, 2, TRANSACTION.ADJUST, BIN_01))

    val futureFind = InventoryDoubleRecordDao.findAllRaw(database)
    val findResult: Seq[InventoryDoubleRecord] = Await.result(futureFind, Duration.Inf)
    findResult should equal(List(InventoryDoubleRecord(Option(1), TEST_SKU, 2, TRANSACTION.ADJUST, BIN_01)))
  }

  test(&quot;create with existing record&quot;) {
    Await.result(InventoryDoubleRecordDao.create(database, TEST_SKU, 2, BIN_01), Duration.Inf)

    val futureUpdate = InventoryDoubleRecordDao.create(database, TEST_SKU, 1, BIN_01)
    val updateResult= Await.result(futureUpdate, Duration.Inf)
    updateResult should equal(InventoryDoubleRecord(None, TEST_SKU, 1, TRANSACTION.ADJUST, BIN_01))

    val futureFindAllRaw = InventoryDoubleRecordDao.findAllRaw(database)
    val findResult: Seq[InventoryDoubleRecord] = Await.result(futureFindAllRaw, Duration.Inf)
    findResult should equal(List(
      InventoryDoubleRecord(Option(1), TEST_SKU, 2, TRANSACTION.ADJUST, BIN_01),
      InventoryDoubleRecord(Option(2), TEST_SKU, -1, TRANSACTION.ADJUST, BIN_01),
    ))
  }
</code></pre>
<!--kg-card-end: markdown--><h3 id="final-create-insert-update-code">Final Create/Insert/Update Code</h3><p>This gives us the following create function:</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
             sku: String,
             qty: Int,
             location: String
            ): Future[InventoryDoubleRecord] = {
    val insert = for {
      // Find the current count for this location and SKU
      initGroupBy &lt;- this.filter(_.sku === sku).filter(_.location === location)
        .groupBy(x =&gt; x.sku).map{ case (_, group) =&gt; (sku, group.map(_.qty).sum) }
        .result.headOption

      // Insert a new record that will result in the desired qty
      _ &lt;- {
        initGroupBy match {
          case None =&gt;
            this += InventoryDoubleRecord(None, sku, qty, TRANSACTION.ADJUST, location)
          case Some((_, Some(existingQty))) =&gt;
            logger.debug(s&quot;FOUND record $qty - $existingQty&quot;)
            this += InventoryDoubleRecord(None, sku, qty - existingQty, TRANSACTION.ADJUST, location)
          case _ =&gt; DBIO.failed(new Exception(&quot;Insert for create failed&quot;))
        }
      }

      // Return the updated value
      newRecordGroupBy &lt;- this.filter(_.sku === sku).filter(_.location === location)
        .groupBy(x =&gt; x.sku).map{ case (_, group) =&gt; (sku, group.map(_.qty).sum) }
        .result.headOption
      newRecordBySku &lt;- {
        newRecordGroupBy match {
          case Some((_, Some(newQty))) =&gt;
            DBIO.successful (InventoryDoubleRecord (None, sku, newQty, TRANSACTION.ADJUST, location) )
          case _ =&gt;
            DBIO.failed(new Exception(&quot;Insert for create failed&quot;))
        }
      }
    } yield newRecordBySku

    db.run(insert.transactionally)
  }
</code></pre>
<!--kg-card-end: markdown--><p>The reader may notice that this query is susceptible to problems with consistency when we have concurrent calls. We have not set an isolation level for our transaction, but just like our problems with inventory transfer <a href="https://honstain.com/inventory-transfer-row-locking/">http://honstain.com/inventory-transfer-row-locking/</a> we can get undesirable behavior if multiple requests to create for a given location/SKU overlap.</p><h3 id="demonstrate-the-consistency-problem">Demonstrate the Consistency Problem</h3><p>If you use siege to pound on the service with a few users you can pretty quickly observe some undesirable behavior.</p><!--kg-card-begin: markdown--><pre><code class="language-text">siege -v -c2 -r10 --content-type &quot;application/json&quot; -f siege_urls.txt

#### siege_urls.txt ####
127.0.0.1:8080/double
127.0.0.1:8080/double POST {&quot;sku&quot;: &quot;SKU-01&quot;,&quot;qty&quot;: 1,&quot;location&quot;: &quot;LOC-01&quot;}
127.0.0.1:8080/double POST {&quot;sku&quot;: &quot;SKU-01&quot;,&quot;qty&quot;: 17,&quot;location&quot;: &quot;LOC-01&quot;}
</code></pre>
<!--kg-card-end: markdown--><p>We would hope to only ever see a quantity of 1 or 17 (with corresponding adjustment records in the DB).</p><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th>id</th>
<th>sku</th>
<th>qty</th>
<th>type</th>
<th>location</th>
</tr>
</thead>
<tbody>
<tr>
<td>448</td>
<td>SKU-01</td>
<td>1</td>
<td>adjust</td>
<td>LOC-01</td>
</tr>
<tr>
<td>449</td>
<td>SKU-01</td>
<td>1</td>
<td>adjust</td>
<td>LOC-01</td>
</tr>
<tr>
<td>450</td>
<td>SKU-01</td>
<td>15</td>
<td>adjust</td>
<td>LOC-01</td>
</tr>
<tr>
<td>451</td>
<td>SKU-01</td>
<td>15</td>
<td>adjust</td>
<td>LOC-01</td>
</tr>
<tr>
<td>452</td>
<td>SKU-01</td>
<td>-31</td>
<td>adjust</td>
<td>LOC-01</td>
</tr>
<tr>
<td>453</td>
<td>SKU-01</td>
<td>0</td>
<td>adjust</td>
<td>LOC-01</td>
</tr>
</tbody>
</table>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><pre><code class="language-text">#### Example Logging from Scalatra Service ####
10:41:52.365 [scala-execution-context-global-35] DEBUG o.b.h.i.d.InventoryDoubleRecordDao$ - INSERT need qty: 17 existing: 2 for sku: SKU-01 location: LOC-01
10:41:52.372 [scala-execution-context-global-35] DEBUG o.b.h.i.d.InventoryDoubleRecordDao$ - INSERT need qty: 17 existing: 2 for sku: SKU-01 location: LOC-01
10:41:52.387 [qtp1637506559-12] DEBUG o.b.h.inventory.app.ToyInventory - GET: location: SKU-01 sku: LOC-01 qty: Some(17)
10:41:56.813 [qtp1637506559-17] DEBUG o.b.h.inventory.app.ToyInventory - GET: location: SKU-01 sku: LOC-01 qty: Some(32)
10:41:56.818 [scala-execution-context-global-39] DEBUG o.b.h.i.d.InventoryDoubleRecordDao$ - INSERT need qty: 1 existing: 32 for sku: SKU-01 location: LOC-01
10:41:56.827 [scala-execution-context-global-39] DEBUG o.b.h.i.d.InventoryDoubleRecordDao$ - INSERT need qty: 1 existing: 1 for sku: SKU-01 location: LOC-01
10:41:56.839 [scala-execution-context-global-38] DEBUG o.b.h.i.d.InventoryDoubleRecordDao$ - INSERT need qty: 17 existing: 1 for sku: SKU-01 location: LOC-01
10:41:56.862 [scala-execution-context-global-38] DEBUG o.b.h.i.d.InventoryDoubleRecordDao$ - INSERT need qty: 17 existing: 17 for sku: SKU-01 location: LOC-01
</code></pre>
<!--kg-card-end: markdown--><p>I would encourage you to play with several different isolation levels and observe how PostgreSQL handles this sort of query (a read followed by an insert).</p><!--kg-card-begin: markdown--><pre><code class="language-scala">db.run(insert.transactionally.withTransactionIsolation(TransactionIsolation.Serializable))
</code></pre>
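<p>If you do experiment with the Serializable isolation level, be aware that PostgreSQL resolves conflicts by aborting one of the transactions with a serialization failure (SQLState <code>40001</code>), so the caller needs to be prepared to retry. Here is a minimal sketch of what a retry wrapper might look like (the <code>insertAction</code> helper and the retry count are illustrative, not part of the service code):</p>
<pre><code class="language-scala">  // Hypothetical wrapper: insertAction would build the same for-comprehension
  // used by create, returning a DBIO[InventoryDoubleRecord].
  def createWithRetry(db: PostgresProfile.backend.DatabaseDef,
                      sku: String, qty: Int, location: String,
                      retries: Int = 3): Future[InventoryDoubleRecord] = {
    db.run(insertAction(sku, qty, location).transactionally
        .withTransactionIsolation(TransactionIsolation.Serializable))
      .recoverWith {
        // 40001 is PostgreSQL&apos;s serialization_failure code - safe to retry
        case e: PSQLException if e.getSQLState == &quot;40001&quot; &amp;&amp; retries &gt; 0 =&gt;
          createWithRetry(db, sku, qty, location, retries - 1)
      }
  }
</code></pre>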
<!--kg-card-end: markdown--><h3 id="summary-of-the-initial-create-insert-update-logic">Summary of the Initial Create/Insert/Update Logic</h3><p>We have created a basic design for tracking inventory using a DB schema based on double-entry accounting. It still has some gaps with consistency (just like our initial attempt that relied on a single DB record for each location and SKU pair), but hopefully this gives you some ideas.</p><p>Source code for this blog post: <a href="https://bitbucket.org/honstain/scalatra-double-record-transfer-service/src/master/?ref=honstain.com">https://bitbucket.org/honstain/scalatra-double-record-transfer-service/src/master/</a></p><h3 id="increasing-consistency-of-the-create-insert-update-logic">Increasing Consistency of the Create/Insert/Update Logic</h3><p>One idea we can explore is taking a pessimistic lock on the location and SKU. What would that imply in our current schema?</p><ul><li>Using a <code>SELECT FOR UPDATE</code> would mean that we lock all the records needed to compute the current value, which would be worse (in terms of DB overhead to support locking) than our schema that used a single record to track the quantity.</li></ul><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th>id</th>
<th>location</th>
<th>sku</th>
<th>qty</th>
<th>type</th>
</tr>
</thead>
<tbody>
<tr>
<td>474</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>1</td>
<td>adjust</td>
</tr>
<tr>
<td>475</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>1</td>
<td>adjust</td>
</tr>
<tr>
<td>476</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>1</td>
<td>adjust</td>
</tr>
<tr>
<td>477</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>1</td>
<td>adjust</td>
</tr>
<tr>
<td>478</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>-3</td>
<td>adjust</td>
</tr>
<tr>
<td>479</td>
<td>LOC-01</td>
<td>SKU-01</td>
<td>0</td>
<td>adjust</td>
</tr>
</tbody>
</table>
<!--kg-card-end: markdown--><p>We will instead create a new table just to support this locking behavior.</p><!--kg-card-begin: markdown--><pre><code class="language-sql">CREATE TABLE inventory_lock
(
  location text,
  sku text,
  revision integer,
  CONSTRAINT pk_lock PRIMARY KEY (location, sku)
);
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><pre><code class="language-scala">class InventoryDoubleRecordLocks(tag: Tag) extends Table[(String, String, Int)](tag, &quot;inventory_lock&quot;) {
  def location = column[String](&quot;location&quot;)
  def sku = column[String](&quot;sku&quot;)
  def revision = column[Int](&quot;revision&quot;)
  def * = (location, sku, revision)
}
</code></pre>
<!--kg-card-end: markdown--><p>Given this additional table, here is one way we might include it in our create functions database transaction:</p><!--kg-card-begin: markdown--><pre><code class="language-scala">      createLock &lt;- {
        TableQuery[InventoryDoubleRecordLocks].filter(x =&gt; x.location === location &amp;&amp; x.sku === sku).forUpdate.result
      }

      _ &lt;- {
        createLock match {
          case Seq((`location`, `sku`, _)) =&gt;
            val updateLock = TableQuery[InventoryDoubleRecordLocks]
            val q = for { x &lt;- updateLock if x.location === location &amp;&amp; x.sku === sku } yield x.revision
            q.update(createLock.head._3 + 1)
          case _ =&gt;
            // Create if no record lock existed
            TableQuery[InventoryDoubleRecordLocks] += (location, sku, 0)
        }
      }
</code></pre>
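<p>As an aside, PostgreSQL 9.5+ supports <code>INSERT ... ON CONFLICT</code>, which could collapse the two steps above into a single upsert that both creates the lock row on first use and takes the row lock when it already exists. This is not what the repository does, but a sketch using Slick&apos;s plain SQL interpolation might look like:</p>
<pre><code class="language-scala">      // The conflicting-row update acquires a row-level lock that is held
      // for the remainder of the transaction, much like SELECT FOR UPDATE.
      _ &lt;- sqlu&quot;&quot;&quot;INSERT INTO inventory_lock (location, sku, revision)
                   VALUES ($location, $sku, 0)
                   ON CONFLICT (location, sku)
                   DO UPDATE SET revision = inventory_lock.revision + 1&quot;&quot;&quot;
</code></pre>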
<!--kg-card-end: markdown--><p>This has two main pieces: attempt to read the record <code>FOR UPDATE</code>, then support creating a new record if this is the first time the Location/SKU pair has been seen.</p><ul><li>A helpful reference here would be to review: <a href="https://www.postgresql.org/docs/11/explicit-locking.html?ref=honstain.com">https://www.postgresql.org/docs/11/explicit-locking.html</a></li></ul><p>I make no claim that this is the best solution to the problem, but it illustrates one way to maintain consistency. This solution does not make use of foreign key constraints or joins. </p><h2 id="summary">Summary</h2><p>We have taken a tour of Scalatra and Slick while implementing a very rudimentary service for tracking physical inventory (tracking quantities of a SKU by location). There are many ways that you could solve this problem, and I have tried to outline some of the options and what the trade-offs are.</p><p>The primary goal of this set of blogs on this toy inventory system was to learn and share (I was exploring as I went). I am still inexperienced with Scala and the ecosystem (while being reasonably comfortable with PostgreSQL). </p><p>I found Scalatra and Slick reasonably difficult to adapt to, probably because I am still trying to write Java and JDBI. I humbly admit my weakness here.</p><p>I had originally set out to compare the transfer logic between the single record design (one DB record to track the qty of a location/SKU pair) with the double-entry model covered in this post. I have implemented a transfer function (which you can find here <a href="https://bitbucket.org/honstain/scalatra-double-record-inventory?ref=honstain.com">https://bitbucket.org/honstain/scalatra-double-record-inventory</a>) but I will not create a special blog post to cover it. 
I think the create example here is sufficient to illustrate the logic and I would like to move on from Scalatra and experiment with the Play framework <a href="https://www.playframework.com/?ref=honstain.com">https://www.playframework.com/</a>.</p><h3 id="want-to-know-more-about-double-entry-accounting">Want to Know More About Double-Entry Accounting?</h3><p>Some initial references that are worth considering if you would like to further familiarize yourself with double record/entry accounting:</p><!--kg-card-begin: markdown--><ul>
<li>Martin Fowler has some excellent posts that I referenced
<ul>
<li>Start here for a nice primer on double-entry accounting along with some historical background on the practice <a href="https://www.martinfowler.com/eaaDev/AccountingNarrative.html?ref=honstain.com">https://www.martinfowler.com/eaaDev/AccountingNarrative.html</a></li>
<li><a href="https://www.martinfowler.com/eaaDev/AccountingTransaction.html?ref=honstain.com">https://www.martinfowler.com/eaaDev/AccountingTransaction.html</a></li>
<li><a href="https://www.martinfowler.com/eaaDev/Account.html?ref=honstain.com">https://www.martinfowler.com/eaaDev/Account.html</a></li>
</ul>
</li>
<li>StackOverflow has some interesting debates on the matter
<ul>
<li><a href="https://stackoverflow.com/questions/287097/inventory-database-design?ref=honstain.com">https://stackoverflow.com/questions/287097/inventory-database-design</a></li>
<li><a href="https://stackoverflow.com/questions/4373968/database-design-calculating-the-account-balance?ref=honstain.com">https://stackoverflow.com/questions/4373968/database-design-calculating-the-account-balance</a></li>
</ul>
</li>
<li>Michael Wigley authored an interesting article on double-entry accounting
<ul>
<li><a href="https://stackoverflow.com/questions/4373968/database-design-calculating-the-account-balance?ref=honstain.com">https://stackoverflow.com/questions/4373968/database-design-calculating-the-account-balance</a></li>
</ul>
</li>
</ul>
<!--kg-card-end: markdown--><p></p><h3></h3>]]></content:encoded></item><item><title><![CDATA[Inventory Transfer Consistently]]></title><description><![CDATA[<p>In our <a href="https://honstain.com/inventory-management-transfer-start/">previous post</a>, we designed a basic system to track the physical transfer of goods between two physical locations.</p><p>Previous posts for our inventory service:</p><ul><li>Part 1 - <a href="https://honstain.com/scalatra-inventory-management-service/">Creating a Scalatra Inventory Management Service</a></li><li>Part 2 - <a href="https://honstain.com/slick-upsert-and-select/">Implementing Create/Update in Slick</a></li><li>Part 3 - <a href="https://honstain.com/inventory-management-transfer-start/">Inventory Management Transfer</a></li></ul><p><strong>WARNING</strong></p>]]></description><link>https://honstain.com/inventory-transfer-row-locking-2/</link><guid isPermaLink="false">65b52aaf7a5d430e36b8ec8c</guid><category><![CDATA[Slick]]></category><category><![CDATA[Scalatra]]></category><category><![CDATA[Scala]]></category><category><![CDATA[PostgreSQL]]></category><dc:creator><![CDATA[Anthony Honstain]]></dc:creator><pubDate>Thu, 18 Apr 2019 15:39:46 GMT</pubDate><media:content url="https://honstain.com/content/images/2019/04/SlickRowLevelLocking.PNG" medium="image"/><content:encoded><![CDATA[<img src="https://honstain.com/content/images/2019/04/SlickRowLevelLocking.PNG" alt="Inventory Transfer Consistently"><p>In our <a href="https://honstain.com/inventory-management-transfer-start/">previous post</a>, we designed a basic system to track the physical transfer of goods between two physical locations.</p><p>Previous posts for our inventory service:</p><ul><li>Part 1 - <a href="https://honstain.com/scalatra-inventory-management-service/">Creating a Scalatra Inventory Management Service</a></li><li>Part 2 - <a href="https://honstain.com/slick-upsert-and-select/">Implementing Create/Update in Slick</a></li><li>Part 3 - <a 
href="https://honstain.com/inventory-management-transfer-start/">Inventory Management Transfer</a></li></ul><p><strong>WARNING </strong>- This may be difficult to follow if you haven&apos;t been working through the previous posts. Reviewing the repository that implements the material covered so far may help: <a href="https://bitbucket.org/honstain/scalatra-single-record-transfer-service/src/master/?ref=honstain.com">https://bitbucket.org/honstain/scalatra-single-record-transfer-service/src/master/</a></p><p>I created a basic diagram to help better visualize the sort of relationship and model we are dealing with in this walk-through.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-10.png" class="kg-image" alt="Inventory Transfer Consistently" loading="lazy"></figure><h2 id="enforcing-consistency-in-the-database">Enforcing Consistency in the Database</h2><p>When we left off in our <a href="https://honstain.com/inventory-management-transfer-start/">previous post</a> we had a solution that worked for the simple happy path, but would quickly manifest undesirable behavior under mild concurrency.</p><p>Our initial attempts focused on using a more restrictive database isolation level. But as we saw from our testing, it exposed a lot of errors to the client (while using Serializable the database would detect and abort our transactions). We could attempt to retry, but we would need to think through any approach with retry holistically to make sure retry was still valid given the new state of the database.</p><p>Possible options to consider going forward:</p><!--kg-card-begin: markdown--><ul>
<li>Retry our database serialization failures (if using the Serializable isolation level).
<ul>
<li>Remember that there are non-trivial performance implications to serializing, but as with anything, it would be to your benefit to experiment and make an informed decision for your use case and read patterns.</li>
</ul>
</li>
<li>Pessimistic row level locking - we use the database to block access to specific rows while we execute our logic and do the update. <a href="https://www.postgresql.org/docs/9.0/sql-select.html?ref=honstain.com#SQL-FOR-UPDATE-SHARE">https://www.postgresql.org/docs/9.0/sql-select.html#SQL-FOR-UPDATE-SHARE</a></li>
<li>Optimistic offline lock - by assuming the risk of conflict is low (especially if multiple services are involved) we rely on a revision id and some additional logic in the client/callers to identify potential issues.
<ul>
<li>Martin Fowler has a great reference <a href="https://martinfowler.com/eaaCatalog/optimisticOfflineLock.html?ref=honstain.com">https://martinfowler.com/eaaCatalog/optimisticOfflineLock.html</a></li>
</ul>
</li>
</ul>
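<p>To make the optimistic offline lock option above concrete: the heart of it is a conditional update that only succeeds when a revision column still holds the value we read earlier. A rough sketch (the <code>revision</code> column is hypothetical here - our <code>inventory_single</code> table does not have one):</p>
<pre><code class="language-scala">  // Returns the number of rows updated: 0 means another caller changed the
  // row first, and the client must re-read and decide whether to retry.
  val q = this.filter(x =&gt; x.id === id &amp;&amp; x.revision === expectedRevision)
    .map(x =&gt; (x.qty, x.revision))
    .update((newQty, expectedRevision + 1))
</code></pre>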
<!--kg-card-end: markdown--><p>These are just a few directions you could go; you could even attempt to circumvent this problem entirely with some alternative technology solutions. Because this series of blog posts is centered around Scala, relational databases, and microservices, I will focus our exploration on ways to work the problem using traditional database techniques.</p><h2 id="row-level-locking">Row Level Locking</h2><p>Our next attempt will be to use row level locking to help us prevent multiple concurrent users from conflicting with one another.</p><p>An excellent reference to review before continuing is the PostgreSQL documentation on select for update SQL: <a href="https://www.postgresql.org/docs/9.0/sql-select.html?ref=honstain.com#SQL-FOR-UPDATE-SHARE">https://www.postgresql.org/docs/9.0/sql-select.html#SQL-FOR-UPDATE-SHARE</a></p><p>Slick provides us access to PostgreSQL row-level locking with the <code>forUpdate</code> method on the Query class.</p><!--kg-card-begin: markdown--><ul>
<li>Before we add row level locking - this is our original transfer function:</li>
</ul>
<pre><code class="language-scala">  def transfer(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               fromLocation: String,
               toLocation: String
              ): Future[Int] = {

    val insert = for {
      toRecord &lt;- {
        this.filter(x =&gt; x.location === toLocation &amp;&amp; x.sku === sku).result.headOption
      }
      fromRecord &lt;- {
        this.filter(x =&gt; x.location === fromLocation &amp;&amp; x.sku === sku).result.headOption
      }
      createUpdateDestination &lt;- {
        toRecord match {
          case Some(InventorySingleRecord(_, `sku`, destQty, `toLocation`)) =&gt;
            // Update
            val q = for { x &lt;- this if x.location === toLocation &amp;&amp; x.sku === sku } yield x.qty
            q.update(destQty + qty)
          case _ =&gt;
            // Create - this is likely susceptible to write skew
            this += InventorySingleRecord(Option.empty, sku, qty, toLocation)
        }
      }
      updateSource &lt;- {
        fromRecord match {
          case Some(InventorySingleRecord(_, `sku`, srcQty, `fromLocation`)) =&gt;

            val destinationQty: Int = if (toRecord.isDefined) toRecord.get.qty else 0
            logger.debug(s&quot;Transfer $qty from:$fromLocation (had qty:$srcQty) to $toLocation (had qty:$destinationQty)&quot;)

            val q = for { x &lt;- this if x.location === fromLocation &amp;&amp; x.sku === sku } yield x.qty
            q.update(srcQty - qty)
          case _ =&gt;
            DBIO.failed(new Exception(&quot;Failed to find source location&quot;))
        }
      }
    } yield updateSource
    db.run(insert.transactionally)
  }
</code></pre>
<ul>
<li>After we just need to add forUpdate to the initial select queries.</li>
</ul>
<pre><code class="language-scala">      toRecord &lt;- {
        this.filter(x =&gt; x.location === toLocation &amp;&amp; x.sku === sku).forUpdate.result.headOption
      }
      fromRecord &lt;- {
        this.filter(x =&gt; x.location === fromLocation &amp;&amp; x.sku === sku).forUpdate.result.headOption
      }
</code></pre>
<!--kg-card-end: markdown--><p>Let&apos;s test this change by trying to run more than one transfer in parallel. We will use the same approach from our <a href="https://honstain.com/inventory-management-transfer-start/">previous post (creating the initial transfer functionality)</a>, where we used the Linux command line tool siege.</p><pre><code class="language-text">siege -v -c2 -r10 --content-type &quot;application/json&quot; -f siege_urls.txt</code></pre><p>Please make sure your database data looks like the following:</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-11.png" class="kg-image" alt="Inventory Transfer Consistently" loading="lazy"></figure><p>The results of our concurrent requests looked like this (we had a few failures).</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-13.png" class="kg-image" alt="Inventory Transfer Consistently" loading="lazy"></figure><p>Looking at the logs from Scalatra, you can see PostgreSQL encountered a deadlock; the response times also show how costly the deadlock was for the database to detect.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-14.png" class="kg-image" alt="Inventory Transfer Consistently" loading="lazy"></figure><p>At some point the requests overlapped in such a way that our fictitious user 929 had taken a lock on <code>LOC-01</code> and then wanted to lock <code>LOC-02</code> only to find that user 629 already had a lock on <code>LOC-02</code> and wanted <code>LOC-01</code>. It would be a good idea to review the PostgreSQL documentation on deadlocks <a href="https://www.postgresql.org/docs/9.5/explicit-locking.html?ref=honstain.com">https://www.postgresql.org/docs/9.5/explicit-locking.html</a>. 
I found it helpful to turn on query logging in PostgreSQL <a href="https://stackoverflow.com/questions/722221/how-to-log-postgresql-queries?ref=honstain.com">https://stackoverflow.com/questions/722221/how-to-log-postgresql-queries</a> if you want even more information about what is happening.</p><pre><code class="language-text">2019-04-18 07:22:23.790 PDT [8195] toyinventory@toyinventory ERROR:  deadlock detected
2019-04-18 07:22:23.790 PDT [8195] toyinventory@toyinventory DETAIL:  Process 8195 waits for ShareLock on transaction 14078; blocked by process 8193.
	Process 8193 waits for ShareLock on transaction 14079; blocked by process 8195.
	Process 8195: select &quot;id&quot;, &quot;sku&quot;, &quot;qty&quot;, &quot;location&quot; from &quot;inventory_single&quot; where (&quot;location&quot; = &apos;LOC-02&apos;) and (&quot;sku&quot; = &apos;SKU-01&apos;) for update 
	Process 8193: select &quot;id&quot;, &quot;sku&quot;, &quot;qty&quot;, &quot;location&quot; from &quot;inventory_single&quot; where (&quot;location&quot; = &apos;LOC-01&apos;) and (&quot;sku&quot; = &apos;SKU-01&apos;) for update </code></pre><h3 id="how-do-we-fix-this">How Do We Fix This?</h3><p>As described in the documentation for PostgreSQL <a href="https://www.postgresql.org/docs/9.5/explicit-locking.html?ref=honstain.com#LOCKING-DEADLOCKS">https://www.postgresql.org/docs/9.5/explicit-locking.html#LOCKING-DEADLOCKS</a></p><blockquote>The best defense against deadlocks is generally to avoid them by being certain that all applications using a database acquire locks on multiple objects in a consistent order.</blockquote><p>How would we go about acquiring our locks in a consistent order? In our code, the first lock we take is the destination location (note the ordering of our current operations in our transfer DAO method is fairly arbitrary).</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def transfer(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               fromLocation: String,
               toLocation: String,
               userId: String
              ): Future[Int] = {

    val insert = for {
      toRecord &lt;- {
        this.filter(x =&gt; x.location === toLocation &amp;&amp; x.sku === sku).forUpdate.result.headOption
      }
      fromRecord &lt;- {
        this.filter(x =&gt; x.location === fromLocation &amp;&amp; x.sku === sku).forUpdate.result.headOption
      }
</code></pre>
<!--kg-card-end: markdown--><p>What if we added a new step at the very beginning that was only responsible for taking out database locks in a consistent order?</p><p>We can start by always taking our first lock on the smallest (by ordering of the location string) location and our second lock on the largest.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">val locations = List(fromLocation, toLocation)
val firstLocationToLock = locations.min
val secondLocationToLock = locations.max
</code></pre>
<!--kg-card-end: markdown--><p>Then add two more queries to our Slick for-comprehension. In this case, we can ignore the response. I leave it as an exercise to the reader to reorganize this Slick query into only 4 database queries (I have chosen to be wasteful to keep the illustration easy to understand).</p><!--kg-card-begin: markdown--><pre><code class="language-scala">    val locations = Seq(fromLocation, toLocation)
    val firstLocationToLock = locations.min
    val secondLocationToLock = locations.max

    val insert = for {
      _ &lt;- {
        this.filter(x =&gt; x.location === firstLocationToLock &amp;&amp; x.sku === sku).forUpdate.result.headOption
      }
      _ &lt;- {
        this.filter(x =&gt; x.location === secondLocationToLock &amp;&amp; x.sku === sku).forUpdate.result.headOption
      }
      toRecord &lt;- {
        this.filter(x =&gt; x.location === toLocation &amp;&amp; x.sku === sku).result.headOption
      }
      fromRecord &lt;- {
        this.filter(x =&gt; x.location === fromLocation &amp;&amp; x.sku === sku).result.headOption
      }
</code></pre>
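<p>If you take up the exercise of trimming this down to 4 queries, one direction is to let the two ordered <code>forUpdate</code> selects double as the reads, then map their results back to the logical destination and source records. A rough, untested sketch:</p>
<pre><code class="language-scala">    val insert = for {
      firstRecord &lt;- this.filter(x =&gt; x.location === firstLocationToLock &amp;&amp; x.sku === sku)
        .forUpdate.result.headOption
      secondRecord &lt;- this.filter(x =&gt; x.location === secondLocationToLock &amp;&amp; x.sku === sku)
        .forUpdate.result.headOption
      // Map the ordered results back to the logical destination/source records
      toRecord = if (toLocation == firstLocationToLock) firstRecord else secondRecord
      fromRecord = if (fromLocation == firstLocationToLock) firstRecord else secondRecord
      // ... the createUpdateDestination and updateSource steps follow unchanged
</code></pre>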
<!--kg-card-end: markdown--><p>With our changes, let&apos;s make sure the automated tests still pass and then attempt to do concurrent transfers.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-15.png" class="kg-image" alt="Inventory Transfer Consistently" loading="lazy"></figure><p>These results look much more promising than our previous attempts. You should review the Scalatra service logs to confirm the result. We can review the PostgreSQL query logs to see if the queries match our expectations.</p><pre><code class="language-text">2019-04-18 08:08:13.925 PDT [11853] toyinventory@toyinventory LOG:  execute S_2: BEGIN
2019-04-18 08:08:13.925 PDT [11853] toyinventory@toyinventory LOG:  execute S_3: select &quot;id&quot;, &quot;sku&quot;, &quot;qty&quot;, &quot;location&quot; from &quot;inventory_single&quot; where (&quot;location&quot; = &apos;LOC-01&apos;) and (&quot;sku&quot; = &apos;SKU-01&apos;) for update 
2019-04-18 08:08:13.930 PDT [11853] toyinventory@toyinventory LOG:  execute S_4: select &quot;id&quot;, &quot;sku&quot;, &quot;qty&quot;, &quot;location&quot; from &quot;inventory_single&quot; where (&quot;location&quot; = &apos;LOC-02&apos;) and (&quot;sku&quot; = &apos;SKU-01&apos;) for update 
2019-04-18 08:08:13.933 PDT [11853] toyinventory@toyinventory LOG:  execute S_6: select &quot;id&quot;, &quot;sku&quot;, &quot;qty&quot;, &quot;location&quot; from &quot;inventory_single&quot; where (&quot;location&quot; = &apos;LOC-01&apos;) and (&quot;sku&quot; = &apos;SKU-01&apos;)
2019-04-18 08:08:13.956 PDT [11853] toyinventory@toyinventory LOG:  execute S_5: select &quot;id&quot;, &quot;sku&quot;, &quot;qty&quot;, &quot;location&quot; from &quot;inventory_single&quot; where (&quot;location&quot; = &apos;LOC-02&apos;) and (&quot;sku&quot; = &apos;SKU-01&apos;)
2019-04-18 08:08:13.959 PDT [11853] toyinventory@toyinventory LOG:  execute S_8: update &quot;inventory_single&quot; set &quot;qty&quot; = $1 where (&quot;inventory_single&quot;.&quot;location&quot; = &apos;LOC-01&apos;) and (&quot;inventory_single&quot;.&quot;sku&quot; = &apos;SKU-01&apos;)
2019-04-18 08:08:13.959 PDT [11853] toyinventory@toyinventory DETAIL:  parameters: $1 = &apos;2&apos;
2019-04-18 08:08:13.963 PDT [11853] toyinventory@toyinventory LOG:  execute S_7: update &quot;inventory_single&quot; set &quot;qty&quot; = $1 where (&quot;inventory_single&quot;.&quot;location&quot; = &apos;LOC-02&apos;) and (&quot;inventory_single&quot;.&quot;sku&quot; = &apos;SKU-01&apos;)
2019-04-18 08:08:13.963 PDT [11853] toyinventory@toyinventory DETAIL:  parameters: $1 = &apos;0&apos;
2019-04-18 08:08:13.964 PDT [11853] toyinventory@toyinventory LOG:  execute S_1: COMMIT</code></pre><p>This is pretty significant progress if you compare these results to our initial implementation. At the cost of row level locking, we can leverage our database to provide a pretty consistent experience for the caller.</p><h2 id="summary">Summary</h2><p>In this and the previous post, we have worked our way from a very basic solution to one that makes some trade-offs with locking but is able to provide some useful consistency. There are a number of ways to approach this problem, and we have made an effort to explore several (including some less successful attempts like utilizing a more aggressive isolation level).</p><p>Hopefully, you found this useful if you were looking for examples of designing systems with relational databases or applications of Scala and Slick.</p>]]></content:encoded></item><item><title><![CDATA[Inventory Transfer Consistently]]></title><description><![CDATA[<p>In our <a href="https://honstain.com/inventory-management-transfer-start/">previous post</a>, we designed a basic system to track the physical transfer of goods between two physical locations.</p><p>Previous posts for our inventory service:</p><ul><li>Part 1 - <a href="https://honstain.com/scalatra-inventory-management-service/">Creating a Scalatra Inventory Management Service</a></li><li>Part 2 - <a href="https://honstain.com/slick-upsert-and-select/">Implementing Create/Update in Slick</a></li><li>Part 3 - <a href="https://honstain.com/inventory-management-transfer-start/">Inventory Management Transfer</a></li></ul><p><strong>WARNING</strong></p>]]></description><link>https://honstain.com/inventory-transfer-row-locking/</link><guid isPermaLink="false">65b526ba7a5d430e36b8ec02</guid><category><![CDATA[Slick]]></category><category><![CDATA[Scalatra]]></category><category><![CDATA[Scala]]></category><category><![CDATA[PostgreSQL]]></category><dc:creator><![CDATA[Anthony 
Honstain]]></dc:creator><pubDate>Thu, 18 Apr 2019 15:39:46 GMT</pubDate><media:content url="https://honstain.com/content/images/2019/04/SlickRowLevelLocking.PNG" medium="image"/><content:encoded><![CDATA[<img src="https://honstain.com/content/images/2019/04/SlickRowLevelLocking.PNG" alt="Inventory Transfer Consistently"><p>In our <a href="https://honstain.com/inventory-management-transfer-start/">previous post</a>, we designed a basic system to track the physical transfer of goods between two physical locations.</p><p>Previous posts for our inventory service:</p><ul><li>Part 1 - <a href="https://honstain.com/scalatra-inventory-management-service/">Creating a Scalatra Inventory Management Service</a></li><li>Part 2 - <a href="https://honstain.com/slick-upsert-and-select/">Implementing Create/Update in Slick</a></li><li>Part 3 - <a href="https://honstain.com/inventory-management-transfer-start/">Inventory Management Transfer</a></li></ul><p><strong>WARNING </strong>- This may be difficult to follow if you haven&apos;t been working through the previous posts. 
Reviewing the repository that implements the material covered so far may help: <a href="https://bitbucket.org/honstain/scalatra-single-record-transfer-service/src/master/?ref=honstain.com">https://bitbucket.org/honstain/scalatra-single-record-transfer-service/src/master/</a></p><p>I created a basic diagram to help visualize the relationship and model we are dealing with in this walk-through.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-10.png" class="kg-image" alt="Inventory Transfer Consistently" loading="lazy"></figure><h2 id="enforcing-consistency-in-the-database">Enforcing Consistency in the Database</h2><p>When we left off in our <a href="https://honstain.com/inventory-management-transfer-start/">previous post</a> we had a solution that worked for the simple happy path, but would quickly manifest undesirable behavior under mild concurrency. </p><p>Our initial attempts focused on using a more restrictive database isolation level. But as we saw from our testing, it exposed a lot of errors to the client (while using Serializable the database would detect and abort our transactions). We could attempt to retry, but we would need to think through any retry approach holistically to make sure the retry was still valid given the new state of the database.</p><p>Possible options to consider going forward:</p><!--kg-card-begin: markdown--><ul>
<li>Retry our database serialization failures (if using the Serializable isolation level).
<ul>
<li>Remember that there are non-trivial performance implications to serializable isolation, but as with anything, it is to your benefit to experiment and make an informed decision for your use case and read patterns.</li>
</ul>
</li>
<li>Pessimistic row level locking - we use the database to block access to specific rows while we execute our logic and perform the update. <a href="https://www.postgresql.org/docs/9.0/sql-select.html?ref=honstain.com#SQL-FOR-UPDATE-SHARE">https://www.postgresql.org/docs/9.0/sql-select.html#SQL-FOR-UPDATE-SHARE</a></li>
<li>Optimistic offline lock - assuming the risk of conflict is low (especially if multiple services are involved), we rely on a revision id and some additional logic in the clients/callers to identify potential conflicts.
<ul>
<li>Martin Fowler has a great reference <a href="https://martinfowler.com/eaaCatalog/optimisticOfflineLock.html?ref=honstain.com">https://martinfowler.com/eaaCatalog/optimisticOfflineLock.html</a></li>
</ul>
</li>
</ul>
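To make the optimistic offline lock option concrete, here is a minimal in-memory sketch. It is hypothetical: the `Record` model and `OptimisticStore` below are illustrative only and are not part of the Slick DAO in this series. Each record carries a revision number, and an update only succeeds if the caller still holds the current revision:

```scala
// Hypothetical in-memory illustration of an optimistic offline lock.
case class Record(qty: Int, revision: Int)

object OptimisticStore {
  private var row = Record(qty = 3, revision = 1)

  def read(): Record = row

  // The update only applies when no one else has bumped the revision
  // since our read; otherwise the caller must re-read and retry.
  def update(newQty: Int, expectedRevision: Int): Boolean = synchronized {
    if (row.revision == expectedRevision) {
      row = Record(newQty, row.revision + 1)
      true
    } else false
  }
}

object OptimisticDemo extends App {
  val r = OptimisticStore.read()
  println(OptimisticStore.update(r.qty - 1, r.revision)) // true: revision matched
  println(OptimisticStore.update(r.qty - 2, r.revision)) // false: stale revision
}
```

In SQL terms this is roughly `UPDATE ... SET qty = ?, revision = revision + 1 WHERE id = ? AND revision = ?`, with the affected row count telling the caller whether its update won or lost.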
<!--kg-card-end: markdown--><p>These are just a few directions you could go; you could even attempt to circumvent this problem entirely with alternative technology choices. Because this series of blog posts is centered around Scala, relational databases, and microservices, I will focus our exploration on ways to work the problem using traditional database techniques.</p><h2 id="row-level-locking">Row Level Locking</h2><p>Our next attempt will be to use row level locking to help us prevent multiple concurrent users from conflicting with one another.</p><p>An excellent reference to review before continuing is the PostgreSQL documentation on the SELECT FOR UPDATE SQL. <a href="https://www.postgresql.org/docs/9.0/sql-select.html?ref=honstain.com#SQL-FOR-UPDATE-SHARE">https://www.postgresql.org/docs/9.0/sql-select.html#SQL-FOR-UPDATE-SHARE</a></p><p>Slick provides us access to PostgreSQL row-level locking with the <code>forUpdate</code> method on the Query class.</p><!--kg-card-begin: markdown--><ul>
<li>Before we add row level locking - this is our original transfer function:</li>
</ul>
<pre><code class="language-scala">  def transfer(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               fromLocation: String,
               toLocation: String
              ): Future[Int] = {

    val insert = for {
      toRecord &lt;- {
        this.filter(x =&gt; x.location === toLocation &amp;&amp; x.sku === sku).result.headOption
      }
      fromRecord &lt;- {
        this.filter(x =&gt; x.location === fromLocation &amp;&amp; x.sku === sku).result.headOption
      }
      createUpdateDestination &lt;- {
        toRecord match {
          case Some(InventorySingleRecord(_, `sku`, destQty, `toLocation`)) =&gt;
            // Update
            val q = for { x &lt;- this if x.location === toLocation &amp;&amp; x.sku === sku } yield x.qty
            q.update(destQty + qty)
          case _ =&gt;
            // Create - this is likely susceptible to write skew
            this += InventorySingleRecord(Option.empty, sku, qty, toLocation)
        }
      }
      updateSource &lt;- {
        fromRecord match {
          case Some(InventorySingleRecord(_, `sku`, srcQty, `fromLocation`)) =&gt;

            val destinationQty: Int = if (toRecord.isDefined) toRecord.get.qty else 0
            logger.debug(s&quot;Transfer $qty from:$fromLocation (had qty:$srcQty) to $toLocation (had qty:$destinationQty)&quot;)

            val q = for { x &lt;- this if x.location === fromLocation &amp;&amp; x.sku === sku } yield x.qty
            q.update(srcQty - qty)
          case _ =&gt;
            DBIO.failed(new Exception(&quot;Failed to find source location&quot;))
        }
      }
    } yield updateSource
    db.run(insert.transactionally)
  }
</code></pre>
<ul>
<li>After - we just need to add <code>forUpdate</code> to the initial select queries:</li>
</ul>
<pre><code class="language-scala">      toRecord &lt;- {
        this.filter(x =&gt; x.location === toLocation &amp;&amp; x.sku === sku).forUpdate.result.headOption
      }
      fromRecord &lt;- {
        this.filter(x =&gt; x.location === fromLocation &amp;&amp; x.sku === sku).forUpdate.result.headOption
      }
</code></pre>
<!--kg-card-end: markdown--><p>Let&apos;s test this change by trying to run more than one transfer in parallel. We will use the same approach from our <a href="https://honstain.com/inventory-management-transfer-start/">previous post (creating the initial transfer functionality)</a>, where we used the Linux command line tool siege.</p><pre><code class="language-text">siege -v -c2 -r10 --content-type &quot;application/json&quot; -f siege_urls.txt</code></pre><p>Please make sure your database data looks like the following:</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-11.png" class="kg-image" alt="Inventory Transfer Consistently" loading="lazy"></figure><p>The results of our concurrent requests looked like this (we had a few failures).</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-13.png" class="kg-image" alt="Inventory Transfer Consistently" loading="lazy"></figure><p>Looking at the logs from Scalatra, you can see PostgreSQL encountered a deadlock; the response times also show how costly the deadlock was for the database to detect.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-14.png" class="kg-image" alt="Inventory Transfer Consistently" loading="lazy"></figure><p>At some point the requests overlapped in such a way that our fictitious user 929 had taken a lock on <code>LOC-01</code> and then wanted to lock <code>LOC-02</code> only to find that user 629 already had a lock on <code>LOC-02</code> and wanted <code>LOC-01</code>. It would be a good idea to review the PostgreSQL documentation on deadlocks <a href="https://www.postgresql.org/docs/9.5/explicit-locking.html?ref=honstain.com">https://www.postgresql.org/docs/9.5/explicit-locking.html</a>. 
I found it helpful to turn on query logging in PostgreSQL <a href="https://stackoverflow.com/questions/722221/how-to-log-postgresql-queries?ref=honstain.com">https://stackoverflow.com/questions/722221/how-to-log-postgresql-queries</a> if you want even more information about what is happening.</p><pre><code class="language-text">2019-04-18 07:22:23.790 PDT [8195] toyinventory@toyinventory ERROR:  deadlock detected
2019-04-18 07:22:23.790 PDT [8195] toyinventory@toyinventory DETAIL:  Process 8195 waits for ShareLock on transaction 14078; blocked by process 8193.
	Process 8193 waits for ShareLock on transaction 14079; blocked by process 8195.
	Process 8195: select &quot;id&quot;, &quot;sku&quot;, &quot;qty&quot;, &quot;location&quot; from &quot;inventory_single&quot; where (&quot;location&quot; = &apos;LOC-02&apos;) and (&quot;sku&quot; = &apos;SKU-01&apos;) for update 
	Process 8193: select &quot;id&quot;, &quot;sku&quot;, &quot;qty&quot;, &quot;location&quot; from &quot;inventory_single&quot; where (&quot;location&quot; = &apos;LOC-01&apos;) and (&quot;sku&quot; = &apos;SKU-01&apos;) for update </code></pre><h3 id="how-do-we-fix-this">How Do We Fix This?</h3><p>As described in the documentation for PostgreSQL <a href="https://www.postgresql.org/docs/9.5/explicit-locking.html?ref=honstain.com#LOCKING-DEADLOCKS">https://www.postgresql.org/docs/9.5/explicit-locking.html#LOCKING-DEADLOCKS</a></p><blockquote>The best defense against deadlocks is generally to avoid them by being certain that all applications using a database acquire locks on multiple objects in a consistent order.</blockquote><p>How would we go about acquiring our locks in a consistent order? In our code, the first lock we take is the destination location (note the ordering of our current operations in our transfer DAO method is fairly arbitrary).</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def transfer(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               fromLocation: String,
               toLocation: String,
               userId: String
              ): Future[Int] = {

    val insert = for {
      toRecord &lt;- {
        this.filter(x =&gt; x.location === toLocation &amp;&amp; x.sku === sku).forUpdate.result.headOption
      }
      fromRecord &lt;- {
        this.filter(x =&gt; x.location === fromLocation &amp;&amp; x.sku === sku).forUpdate.result.headOption
      }
</code></pre>
<!--kg-card-end: markdown--><p>What if we added a new step at the very beginning that was only responsible for taking out database locks in a consistent order?</p><p>We can start by always taking our first lock on the smallest (by ordering of the location string) location and our second lock on the largest.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">val locations = List(fromLocation, toLocation)
val firstLocationToLock = locations.min
val secondLocationToLock = locations.max
</code></pre>
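We can check the key property of this min/max scheme in isolation: both directions of a transfer derive the same (first, second) lock pair, so two opposing transfers always request the row locks in the same order and can no longer deadlock on these two rows. A small standalone sketch (the `LockOrder` helper name is ours, not from the service code):

```scala
// Derives a deterministic lock order for a pair of locations, mirroring
// the min/max logic above.
object LockOrder {
  def order(fromLocation: String, toLocation: String): (String, String) = {
    val locations = List(fromLocation, toLocation)
    (locations.min, locations.max)
  }
}

object LockOrderDemo extends App {
  // Both transfer directions produce the same lock ordering.
  println(LockOrder.order("LOC-01", "LOC-02")) // (LOC-01,LOC-02)
  println(LockOrder.order("LOC-02", "LOC-01")) // (LOC-01,LOC-02)
}
```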
<!--kg-card-end: markdown--><p>Then add two more queries to our Slick for-comprehension. In this case, we can ignore the responses. I leave it as an exercise for the reader to reorganize this Slick query into only four database queries (I have chosen to be wasteful in favor of being simple to understand for the purpose of illustration).</p><!--kg-card-begin: markdown--><pre><code class="language-scala">    val locations = Seq(fromLocation, toLocation)
    val firstLocationToLock = locations.min
    val secondLocationToLock = locations.max

    val insert = for {
      _ &lt;- {
        this.filter(x =&gt; x.location === firstLocationToLock &amp;&amp; x.sku === sku).forUpdate.result.headOption
      }
      _ &lt;- {
        this.filter(x =&gt; x.location === secondLocationToLock &amp;&amp; x.sku === sku).forUpdate.result.headOption
      }
      toRecord &lt;- {
        this.filter(x =&gt; x.location === toLocation &amp;&amp; x.sku === sku).result.headOption
      }
      fromRecord &lt;- {
        this.filter(x =&gt; x.location === fromLocation &amp;&amp; x.sku === sku).result.headOption
      }
</code></pre>
<!--kg-card-end: markdown--><p>With our changes, let&apos;s make sure the automated tests still pass and then attempt some concurrent transfers.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-15.png" class="kg-image" alt="Inventory Transfer Consistently" loading="lazy"></figure><p>These results look much more promising than our previous attempts. You should review the Scalatra service logs to confirm the result. We can review the PostgreSQL query logs to see if the queries match our expectations.</p><pre><code class="language-text">2019-04-18 08:08:13.925 PDT [11853] toyinventory@toyinventory LOG:  execute S_2: BEGIN
2019-04-18 08:08:13.925 PDT [11853] toyinventory@toyinventory LOG:  execute S_3: select &quot;id&quot;, &quot;sku&quot;, &quot;qty&quot;, &quot;location&quot; from &quot;inventory_single&quot; where (&quot;location&quot; = &apos;LOC-01&apos;) and (&quot;sku&quot; = &apos;SKU-01&apos;) for update 
2019-04-18 08:08:13.930 PDT [11853] toyinventory@toyinventory LOG:  execute S_4: select &quot;id&quot;, &quot;sku&quot;, &quot;qty&quot;, &quot;location&quot; from &quot;inventory_single&quot; where (&quot;location&quot; = &apos;LOC-02&apos;) and (&quot;sku&quot; = &apos;SKU-01&apos;) for update 
2019-04-18 08:08:13.933 PDT [11853] toyinventory@toyinventory LOG:  execute S_6: select &quot;id&quot;, &quot;sku&quot;, &quot;qty&quot;, &quot;location&quot; from &quot;inventory_single&quot; where (&quot;location&quot; = &apos;LOC-01&apos;) and (&quot;sku&quot; = &apos;SKU-01&apos;)
2019-04-18 08:08:13.956 PDT [11853] toyinventory@toyinventory LOG:  execute S_5: select &quot;id&quot;, &quot;sku&quot;, &quot;qty&quot;, &quot;location&quot; from &quot;inventory_single&quot; where (&quot;location&quot; = &apos;LOC-02&apos;) and (&quot;sku&quot; = &apos;SKU-01&apos;)
2019-04-18 08:08:13.959 PDT [11853] toyinventory@toyinventory LOG:  execute S_8: update &quot;inventory_single&quot; set &quot;qty&quot; = $1 where (&quot;inventory_single&quot;.&quot;location&quot; = &apos;LOC-01&apos;) and (&quot;inventory_single&quot;.&quot;sku&quot; = &apos;SKU-01&apos;)
2019-04-18 08:08:13.959 PDT [11853] toyinventory@toyinventory DETAIL:  parameters: $1 = &apos;2&apos;
2019-04-18 08:08:13.963 PDT [11853] toyinventory@toyinventory LOG:  execute S_7: update &quot;inventory_single&quot; set &quot;qty&quot; = $1 where (&quot;inventory_single&quot;.&quot;location&quot; = &apos;LOC-02&apos;) and (&quot;inventory_single&quot;.&quot;sku&quot; = &apos;SKU-01&apos;)
2019-04-18 08:08:13.963 PDT [11853] toyinventory@toyinventory DETAIL:  parameters: $1 = &apos;0&apos;
2019-04-18 08:08:13.964 PDT [11853] toyinventory@toyinventory LOG:  execute S_1: COMMIT</code></pre><p>This is pretty significant progress if you compare these results to our initial implementation. At the cost of row level locking, we can leverage our database to provide a pretty consistent experience for the caller.</p><h2 id="summary">Summary</h2><p>In this and the previous post, we have worked our way from a very basic solution to one that makes some trade-offs with locking but is able to provide some useful consistency. There are a number of ways to approach this problem, and we have made an effort to explore several (including some less successful attempts like utilizing a more aggressive isolation level).</p><p>Hopefully, you found this useful if you were looking for examples of designing systems with relational databases or applications of Scala and Slick.</p>]]></content:encoded></item><item><title><![CDATA[Inventory Management Transfer]]></title><description><![CDATA[<p>Continuing in our series of posts about creating a basic Scalatra service for managing inventory, we would now like to implement the persistence logic to transfer inventory from one location to another.</p><p>Previous posts for our inventory service:</p><ul><li><a href="https://honstain.com/scalatra-inventory-management-service/">Creating a Scalatra Inventory Management Service</a></li><li><a href="https://honstain.com/slick-upsert-and-select/">Implementing Create/Update in Slick</a></li></ul><p><strong>WARNING</strong></p>]]></description><link>https://honstain.com/inventory-management-transfer-start-2/</link><guid isPermaLink="false">65b52aaf7a5d430e36b8ec8b</guid><category><![CDATA[Scala]]></category><category><![CDATA[Scalatra]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[Slick]]></category><dc:creator><![CDATA[Anthony Honstain]]></dc:creator><pubDate>Sat, 13 Apr 2019 22:11:34 GMT</pubDate><media:content 
url="https://honstain.com/content/images/2019/04/DB_Inventory_Transfer_Dirty.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://honstain.com/content/images/2019/04/DB_Inventory_Transfer_Dirty.jpg" alt="Inventory Management Transfer"><p>Continuing in our series of posts about creating a basic Scalatra service for managing inventory, we would now like to implement the persistence logic to transfer inventory from one location to another.</p><p>Previous posts for our inventory service:</p><ul><li><a href="https://honstain.com/scalatra-inventory-management-service/">Creating a Scalatra Inventory Management Service</a></li><li><a href="https://honstain.com/slick-upsert-and-select/">Implementing Create/Update in Slick</a></li></ul><p><strong>WARNING </strong>- This may be difficult to follow if you haven&apos;t been working through the previous posts. Reviewing the repository that implements the material covered so far may help: <a href="https://bitbucket.org/honstain/scalatra-single-record-transfer-service/src/master/?ref=honstain.com">https://bitbucket.org/honstain/scalatra-single-record-transfer-service/src/master/</a></p><h2 id="transfer-inventory-between-locations">Transfer Inventory Between Locations</h2><p>NOTE - I am going to intentionally start with a naive implementation and demonstrate testing it (exposing issues). We will iterate on the design together.</p><p>Let&apos;s establish some basic requirements or expectations for this logic. We may initially relax some of these requirements for illustrative purposes (but this is where we are going).</p><ul><li>The source location must exist when attempting to transfer a SKU+qty from a source location to a destination location.</li><li>The source location must have enough inventory to supply the requested qty being transferred. 
Said another way, we will not support going negative.</li></ul><p>To illustrate, if we wanted to transfer 1 unit of &apos;SKU-01&apos; from location &apos;LOC-01&apos; to &apos;LOC-02&apos;.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><p>We will want to remove one from &apos;LOC-01&apos; and then increase the quantity of &apos;LOC-02&apos;. An initial test to help us validate this behavior for our DAO might look like:</p><!--kg-card-begin: markdown--><pre><code class="language-scala">def createInventoryHelper(sku: String, qty: Int, location: String): InventorySingleRecord = {
  val create = InventorySingleRecordDao.create(database, sku, qty, location)
  Await.result(create, Duration.Inf).get
}
  
test(&quot;transfer&quot;) {
  createInventoryHelper(TEST_SKU, 1, BIN_01)
  createInventoryHelper(TEST_SKU, 0, BIN_02)

  val futureTrans = InventorySingleRecordDao.transfer(database, TEST_SKU, 1, BIN_01, BIN_02)
  Await.result(futureTrans, Duration.Inf)

  val futureFind = InventorySingleRecordDao.findAll(database)
  val findResult: Seq[InventorySingleRecord] = Await.result(futureFind, Duration.Inf)

  findResult should contain only (
    InventorySingleRecord(Some(1), &quot;NewSku&quot;, 0, BIN_01),
    InventorySingleRecord(Some(2), &quot;NewSku&quot;, 1, BIN_02),
  )
}
</code></pre>
<!--kg-card-end: markdown--><p>Now to implement the <code>InventorySingleRecordDao.transfer</code> logic, we can reference the upsert logic from our <a href="https://honstain.com/slick-upsert-and-select/">previous post</a> that uses the Scala for comprehension and a database transaction. We can start by just modifying the source location and ignore the destination location (the test won&apos;t pass, but we will be able to validate our transfer logic incrementally).</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-1.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><p>This gives us a basic overview of the transfer function in our DAO. We have a <code>for</code> comprehension with the query to retrieve the database record for the source (fromLocation) and then attempt to modify it (there is basic handling to address the possibility that we don&apos;t find the source record).</p><!--kg-card-begin: markdown--><pre><code class="language-scala">def transfer(db: PostgresProfile.backend.DatabaseDef,
             sku: String,
             qty: Int,
             fromLocation: String,
             toLocation: String
            ): Future[Int] = {

  val insert = for {
    fromRecord &lt;- {
      this.filter(x =&gt; x.location === fromLocation &amp;&amp; x.sku === sku).result.headOption
    }
    updateSource &lt;- {
      fromRecord match {
        case Some(InventorySingleRecord(_, `sku`, srcQty, `fromLocation`)) =&gt;
          val q = for { x &lt;- this if x.location === fromLocation &amp;&amp; x.sku === sku } yield x.qty
          q.update(srcQty - qty)
        case _ =&gt;
          DBIO.failed(new Exception(&quot;Failed to find source location&quot;))
      }
    }
  } yield updateSource
  db.run(insert.transactionally)
}
</code></pre>
<!--kg-card-end: markdown--><p>Running the test we just created, we will get a failure, but hopefully we will be able to see that the source &apos;Bin-01&apos; was decremented.</p><pre><code class="language-text">Vector(
	InventorySingleRecord(Some(2),NewSku,0,Bin-02),
	InventorySingleRecord(Some(1),NewSku,0,Bin-01))
did not contain only (
	InventorySingleRecord(Some(1),NewSku,0,Bin-01), 
InventorySingleRecord(Some(2),NewSku,1,Bin-02))</code></pre><p><strong>Now we need to update the destination record.</strong></p><p>Just like when we updated the source location, we will want to query for it, but this record is less likely to already exist (in this example we assume physical inventory already exists in the source location, but it&apos;s much more likely that the sku+location combination of the destination is unknown to the database - remember that those two columns are integral to our design).</p><!--kg-card-begin: markdown--><pre><code class="language-scala">toRecord &lt;- {
  this.filter(x =&gt; x.location === toLocation &amp;&amp; x.sku === sku).result.headOption
}
</code></pre>
<!--kg-card-end: markdown--><p>Now that we possibly have a record for the destination, we can try to update or create. This should be very familiar if you worked through the <a href="https://honstain.com/slick-upsert-and-select/">upsert logic in my previous post</a>.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">createUpdateDestination &lt;- {
  toRecord match {
    case Some(InventorySingleRecord(_, `sku`, destQty, `toLocation`)) =&gt;
      // Update
      logger.debug(s&quot;Transfer from:$fromLocation to $toLocation found $destQty in destination&quot;)
      val q = for { x &lt;- this if x.location === toLocation &amp;&amp; x.sku === sku } yield x.qty
      q.update(destQty + qty)
    case _ =&gt;
      // Create - this is likely susceptible to write skew
      this += InventorySingleRecord(Option.empty, sku, qty, toLocation)
  }
}
</code></pre>
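The comment on the create branch is worth making concrete. Strictly speaking the hazard is a check-then-insert race rather than classic write skew: two concurrent transfers can both see no destination record and both insert one. A hypothetical in-memory sketch of that interleaving (the `CreateRace` helper is ours, not from the service code):

```scala
// Hypothetical sketch of the create-path race: both transfers run the
// existence check before either inserts, so both insert, duplicating the
// sku+location row that our design treats as unique.
object CreateRace {
  def duplicateInserts(): Int = {
    var rows = List.empty[(String, String, Int)] // (sku, location, qty)
    def exists(sku: String, loc: String): Boolean =
      rows.exists(r => r._1 == sku && r._2 == loc)

    val aSawRecord = exists("SKU-01", "LOC-02") // transfer A checks: not found
    val bSawRecord = exists("SKU-01", "LOC-02") // transfer B checks: not found
    if (!aSawRecord) rows = ("SKU-01", "LOC-02", 1) :: rows // A inserts
    if (!bSawRecord) rows = ("SKU-01", "LOC-02", 1) :: rows // B inserts too
    rows.size
  }
}

object CreateRaceDemo extends App {
  println(CreateRace.duplicateInserts()) // 2 rows for one sku+location
}
```

In the real service, a unique constraint on (sku, location) would turn the second insert into a database error rather than silent duplication.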
<!--kg-card-end: markdown--><p>Hopefully, with these pieces all put together you get a passing test.</p><h3 id="current-code-for-a-very-basic-transfer-of-inventory">Current Code for a Very Basic Transfer of Inventory</h3><!--kg-card-begin: markdown--><pre><code class="language-scala">def transfer(db: PostgresProfile.backend.DatabaseDef,
             sku: String,
             qty: Int,
             fromLocation: String,
             toLocation: String
            ): Future[Int] = {

  val insert = for {
    toRecord &lt;- {
      this.filter(x =&gt; x.location === toLocation &amp;&amp; x.sku === sku).result.headOption
    }
    fromRecord &lt;- {
      this.filter(x =&gt; x.location === fromLocation &amp;&amp; x.sku === sku).result.headOption
    }
    createUpdateDestination &lt;- {
      toRecord match {
        case Some(InventorySingleRecord(_, `sku`, destQty, `toLocation`)) =&gt;
          // Update
          logger.debug(s&quot;Transfer from:$fromLocation to $toLocation found $destQty in destination&quot;)
          val q = for { x &lt;- this if x.location === toLocation &amp;&amp; x.sku === sku } yield x.qty
          q.update(destQty + qty)
        case _ =&gt;
          this += InventorySingleRecord(Option.empty, sku, qty, toLocation)
      }
    }
    updateSource &lt;- {
      fromRecord match {
        case Some(InventorySingleRecord(_, `sku`, srcQty, `fromLocation`)) =&gt;
          val q = for { x &lt;- this if x.location === fromLocation &amp;&amp; x.sku === sku } yield x.qty
          q.update(srcQty - qty)
        case _ =&gt;
          DBIO.failed(new Exception(&quot;Failed to find source location&quot;))
      }
    }
  } yield updateSource
  db.run(insert.transactionally)
}
</code></pre>
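Before we load test it, note that this transfer is a read-then-write sequence: we select the quantities, compute new values in application code, and write them back. Without extra protection that pattern is a classic lost-update hazard. A hypothetical in-memory simulation (not the Slick DAO above, just the interleaving it is vulnerable to):

```scala
// Hypothetical simulation of two "concurrent" transfers interleaving their
// reads and writes: both read the same starting qty, so one decrement is lost.
object LostUpdate {
  def interleaved(startQty: Int): Int = {
    var qty = startQty
    val readA = qty  // transfer A reads
    val readB = qty  // transfer B reads before A writes
    qty = readA - 1  // A writes its computed value
    qty = readB - 1  // B overwrites A's write with its own stale computation
    qty
  }
}

object LostUpdateDemo extends App {
  // Two transfers of 1 unit each should leave 0, but one update is lost.
  println(LostUpdate.interleaved(2))
}
```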
<!--kg-card-end: markdown--><h2 id="exposing-the-flaws-in-this-initial-design">Exposing the Flaws in this Initial Design</h2><p>At this stage we hopefully have a basic DAO that implements the following.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">def findAll(db: PostgresProfile.backend.DatabaseDef): Future[Seq[InventorySingleRecord]]

def create(db: PostgresProfile.backend.DatabaseDef,
           sku: String,
           qty: Int,
           location: String
          ): Future[Option[InventorySingleRecord]]

def transfer(db: PostgresProfile.backend.DatabaseDef,
             sku: String,
             qty: Int,
             fromLocation: String,
             toLocation: String
            ): Future[Int]
</code></pre>
<!--kg-card-end: markdown--><p>I created a repository with a working Scalatra service to snapshot this stage of the development:</p><ul><li><a href="https://bitbucket.org/honstain/scalatra-single-record-transfer-service/src/master/?ref=honstain.com">https://bitbucket.org/honstain/scalatra-single-record-transfer-service/src/master/</a></li></ul><p>This repo provides an example of a Scalatra service that exposes our DAO operations via a set of crude HTTP endpoints. The README.md provides some pointers on starting the service. If this is unfamiliar or you would like a refresher, I suggest reviewing the previous post where I covered just this aspect of Scalatra <a href="https://honstain.com/rest-in-a-scalatra-service/">http://honstain.com/rest-in-a-scalatra-service/</a>.</p><p>Before beginning this next step, I suggest you verify that you have a service that runs, with a database, and can respond to HTTP requests.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-2.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><p>Using a tool like Postman to transfer inventory between LOC-01 and LOC-02, you may notice that the service doesn&apos;t enforce any constraints yet; it can result in negative quantities and allows moving negative amounts. What we are really interested in is how consistent the database will be with our current DAO queries.</p><h3 id="exposing-consistency-problems">Exposing Consistency Problems</h3><p>One way to test our system might be to attempt to move inventory in parallel (for the same SKU and location group). 
Let&apos;s start with just two locations and a single SKU; trying to move back and forth between LOC-01 and LOC-02 should be enough to expose an issue.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-3.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><p>There are several tools that you could use for load and performance testing. You could even write your own script or test. I have opted to use <code>siege</code>, which is a common command line tool you can probably retrieve from your Linux package manager (<a href="https://www.joedog.org/siege-home/?ref=honstain.com">https://www.joedog.org/siege-home/</a>). By defining a set of URLs, we will use siege to execute the following in parallel:</p><ul><li>Get all the inventory</li><li>Transfer 1 qty of SKU <code>SKU-01</code> from <code>LOC-01</code> to <code>LOC-02</code></li><li>Transfer 1 qty of SKU <code>SKU-01</code> from <code>LOC-02</code> to <code>LOC-01</code></li></ul><!--kg-card-begin: markdown--><pre><code class="language-text"># siege_urls.txt - a urls file https://www.joedog.org/siege-manual/#a05

127.0.0.1:8080/

127.0.0.1:8080/transfer POST {&quot;sku&quot;: &quot;SKU-01&quot;,&quot;qty&quot;: 1,&quot;fromLocation&quot;: &quot;LOC-01&quot;,&quot;toLocation&quot;: &quot;LOC-02&quot;}

127.0.0.1:8080/transfer POST {&quot;sku&quot;: &quot;SKU-01&quot;,&quot;qty&quot;: 1,&quot;fromLocation&quot;: &quot;LOC-02&quot;,&quot;toLocation&quot;: &quot;LOC-01&quot;}
</code></pre>
<!--kg-card-end: markdown--><p>This command will start siege running with the following arguments:</p><ul><li><code>-v</code> verbose mode</li><li><code>-c2</code> 2 concurrent requests</li><li><code>-r10</code> run the test 10 times</li><li><code>--content-type</code> specifying the content type for our API</li><li><code>-f</code> the URLs file</li></ul><p><code>siege -v -c2 -r10 --content-type &quot;application/json&quot; -f siege_urls.txt</code></p><!--kg-card-begin: markdown--><pre><code class="language-text">** SIEGE 4.0.4
** Preparing 2 concurrent users for battle.
The server is now under siege...
HTTP/1.1 200     0.01 secs:      91 bytes ==&gt; GET  /
HTTP/1.1 200     0.07 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.06 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.04 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.00 secs:      91 bytes ==&gt; GET  /
HTTP/1.1 200     0.03 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.02 secs:      91 bytes ==&gt; GET  /
HTTP/1.1 200     0.04 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.04 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.04 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.01 secs:      91 bytes ==&gt; GET  /
HTTP/1.1 200     0.07 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.01 secs:      92 bytes ==&gt; GET  /
HTTP/1.1 200     0.06 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.05 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.07 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.01 secs:      91 bytes ==&gt; GET  /
HTTP/1.1 200     0.05 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.01 secs:      92 bytes ==&gt; GET  /
HTTP/1.1 200     0.04 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer

Transactions:		          20 hits
Availability:		      100.00 %
Elapsed time:		        0.38 secs
Data transferred:	        0.00 MB
Response time:		        0.04 secs
Transaction rate:	       52.63 trans/sec
Throughput:		        0.00 MB/sec
Concurrency:		        1.92
Successful transactions:          20
Failed transactions:	           0
Longest transaction:	        0.07
Shortest transaction:	        0.00
</code></pre>
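<p>As a quick sanity check on the summary block, the transaction rate siege reports is just hits divided by elapsed time:</p>

```scala
// Numbers from the siege summary above.
val hits = 20
val elapsedSecs = 0.38
val transactionRate = hits / elapsedSecs
println(f"$transactionRate%.2f trans/sec") // 52.63 trans/sec
```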
<!--kg-card-end: markdown--><p>Keeping in mind that we started with qty 2 in LOC-01 and qty 0 in LOC-02, let&apos;s see what we have now. NOTE - you will likely have different results; if the results look good, please repeat the test, since we are trying to demonstrate a concurrency issue.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-4.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><p>You can look through the logs of the Scalatra service and try to spot where things first go off the rails. I added the following log line to the update source section.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">updateSource &lt;- {
  fromRecord match {
    case Some(InventorySingleRecord(_, `sku`, srcQty, `fromLocation`)) =&gt;

      val destinationQty: Int = if (toRecord.isDefined) toRecord.get.qty else 0
      logger.debug(s&quot;Transfer $qty from:$fromLocation (had qty:$srcQty) to $toLocation (had qty:$destinationQty)&quot;)

      val q = for { x &lt;- this if x.location === fromLocation &amp;&amp; x.sku === sku } yield x.qty
      q.update(srcQty - qty)
    case _ =&gt;
      DBIO.failed(new Exception(&quot;Failed to find source location&quot;))
  }
}
</code></pre>
<!--kg-card-end: markdown--><p>An example of my Scalatra logs that demonstrate our consistency issue:</p><!--kg-card-begin: markdown--><pre><code class="language-text"># Two siege calls happen in close succession, both see qty 2 and move 1
Transfer 1 from:LOC-01 (had qty:2) to LOC-02 (had qty:0)
Transfer 1 from:LOC-01 (had qty:2) to LOC-02 (had qty:0)

# Two siege calls happen again but find that only 1 unit was moved
# We are already in an inconsistent state
Transfer 1 from:LOC-02 (had qty:1) to LOC-01 (had qty:1)
Transfer 1 from:LOC-02 (had qty:1) to LOC-01 (had qty:1)

# The problem just spirals out from here.
Transfer 1 from:LOC-01 (had qty:2) to LOC-02 (had qty:0)
Transfer 1 from:LOC-01 (had qty:1) to LOC-02 (had qty:0)
Transfer 1 from:LOC-02 (had qty:1) to LOC-01 (had qty:1)
Transfer 1 from:LOC-02 (had qty:0) to LOC-01 (had qty:2)
</code></pre>
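<p>The paired log lines above show two calls reading the same starting quantity and each writing back a value computed from that stale read. The arithmetic of this lost update can be reproduced deterministically in plain Scala, with two vars standing in for the database rows (purely illustrative, no database involved):</p>

```scala
// LOC-01 starts with qty 2, LOC-02 with qty 0: a total of 2 units.
var loc01 = 2
var loc02 = 0

// Both "transactions" read the source before either one writes.
val readA = loc01 // client A sees 2
val readB = loc01 // client B also sees 2

loc01 = readA - 1 // A writes 1
loc01 = readB - 1 // B overwrites with 1: A's decrement is lost

// Both destination updates land, so a unit appears out of thin air.
loc02 += 1
loc02 += 1

println(s"LOC-01=$loc01 LOC-02=$loc02 total=${loc01 + loc02}") // total=3, not 2
```

<p>One decrement is overwritten while both increments land, so the total grows from 2 to 3.</p>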
<!--kg-card-end: markdown--><p>Why is this happening? First, let&apos;s consider that we are executing multiple queries in a transaction, but our isolation level is the default Read Committed <a href="https://www.postgresql.org/docs/9.1/transaction-iso.html?ref=honstain.com">https://www.postgresql.org/docs/9.1/transaction-iso.html</a>. The two transactions overlap, and as a result one of the clients does not produce the desired change.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-5.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><h3 id="attempt-to-enforce-consistency">Attempt to Enforce Consistency</h3><p>Often the first thing people reach for is to ratchet up the isolation level and let the database sort things out. We will try that approach, jump our query up to the Serializable isolation level, and see what happens.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">db.run(insert.transactionally.withTransactionIsolation(TransactionIsolation.Serializable))
</code></pre>
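<p>One practical consequence of Serializable: PostgreSQL resolves conflicts by aborting one of the competing transactions, so callers are generally expected to retry. Below is a minimal retry sketch; it is a hypothetical helper, not code from the repository, and a production version should only retry serialization failures (SQLSTATE 40001) rather than every exception:</p>

```scala
import scala.util.{Failure, Success, Try}

// Retry `attempt` up to maxAttempts times; `attempt` stands in for a db.run(...) call.
def retrySerialization[T](maxAttempts: Int)(attempt: () => T): T =
  Try(attempt()) match {
    case Success(v) => v
    case Failure(_) if maxAttempts > 1 =>
      // Hypothetical: a real helper should inspect the error for SQLSTATE 40001.
      retrySerialization(maxAttempts - 1)(attempt)
    case Failure(e) => throw e
  }

// Simulate a transfer that is aborted twice before it commits.
var calls = 0
val outcome = retrySerialization(5) { () =>
  calls += 1
  if (calls < 3) throw new RuntimeException("ERROR: could not serialize access due to concurrent update")
  "committed"
}
println(s"$outcome after $calls attempts") // committed after 3 attempts
```

<p>With a helper like this wrapping <code>db.run</code>, an aborted transfer becomes a retried transfer instead of a client-visible error.</p>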
<!--kg-card-end: markdown--><p>Running siege again, you might see something like this:</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-7.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><p>NOTE ON SIEGE - Siege is a brute-force sort of tool; it will blindly keep cycling through its URLs. Hence a previous failure will not cause it to stop or retry (as currently configured), leading to unusual behavior for larger numbers of iterations, as it makes no attempt to model a real user or the physical world.</p><p>I have organized the log data to better illustrate the success and failure of each call (assigning a color to each distinct transfer call). Note that I also started passing around a user id (a random int assigned at the start of the REST call) to help us trace things; it probably would have been better to refer to it as a tracing id / provenance id.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-8.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><p>Now we have consistent data, but PostgreSQL is achieving that by aborting our transactions whenever it identifies a problem. It is helpful to reference the PostgreSQL docs at this stage: <a href="https://www.postgresql.org/docs/9.1/transaction-iso.html?ref=honstain.com">https://www.postgresql.org/docs/9.1/transaction-iso.html</a></p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-9.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><p>If you siege long enough, you may also observe PostgreSQL detect and kill a deadlock.</p><pre><code class="language-text">14:40:20.312 [scala-execution-context-global-39] DEBUG o.b.h.inventory.app.ToyInventory - user: 543 - ERROR: deadlock detected
  Detail: Process 25421 waits for ShareLock on transaction 206692; blocked by process 25419.
Process 25419 waits for ShareLock on transaction 206691; blocked by process 25421.
  Hint: See server log for query details.
  Where: while updating tuple (0,91) in relation &quot;inventory_single&quot;</code></pre><p>I will leave it to the reader to try Repeatable Read as an exercise.</p><h2 id="summary">Summary</h2><p>We have made an attempt to implement the functionality for modeling the transfer of physical inventory from one location to another. While tests pass and things work on the happy path, when we introduce concurrency we have problems. In our next post we will explore some additional options to help us maintain consistency.</p>]]></content:encoded></item><item><title><![CDATA[Inventory Management Transfer]]></title><description><![CDATA[<p>Continuing in our series of posts about creating a basic Scalatra service for managing inventory, we would now like to implement the persistence logic to transfer inventory from one location to another.</p><p>Previous posts for our inventory service:</p><ul><li><a href="https://honstain.com/scalatra-inventory-management-service/">Creating a Scalatra Inventory Management Service</a></li><li><a href="https://honstain.com/slick-upsert-and-select/">Implementing Create/Update in Slick</a></li></ul><p><strong>WARNING</strong></p>]]></description><link>https://honstain.com/inventory-management-transfer-start/</link><guid isPermaLink="false">65b526ba7a5d430e36b8ec01</guid><category><![CDATA[Scala]]></category><category><![CDATA[Scalatra]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[Slick]]></category><dc:creator><![CDATA[Anthony Honstain]]></dc:creator><pubDate>Sat, 13 Apr 2019 22:11:34 GMT</pubDate><media:content url="https://honstain.com/content/images/2019/04/DB_Inventory_Transfer_Dirty.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://honstain.com/content/images/2019/04/DB_Inventory_Transfer_Dirty.jpg" alt="Inventory Management Transfer"><p>Continuing in our series of posts about creating a basic Scalatra service for managing inventory, we would now like to implement the persistence logic to transfer 
inventory from one location to another.</p><p>Previous posts for our inventory service:</p><ul><li><a href="https://honstain.com/scalatra-inventory-management-service/">Creating a Scalatra Inventory Management Service</a></li><li><a href="https://honstain.com/slick-upsert-and-select/">Implementing Create/Update in Slick</a></li></ul><p><strong>WARNING </strong>- This may be difficult to follow if you haven&apos;t been working through the previous posts. Reviewing this repository, which implements the material covered so far, may help: <a href="https://bitbucket.org/honstain/scalatra-single-record-transfer-service/src/master/?ref=honstain.com">https://bitbucket.org/honstain/scalatra-single-record-transfer-service/src/master/</a></p><h2 id="transfer-inventory-between-locations">Transfer Inventory Between Locations</h2><p>NOTE - I am going to intentionally start with a naive implementation and demonstrate testing it (exposing issues). We will iterate on the design together.</p><p>Let&apos;s establish some basic requirements or expectations for this logic. We may initially relax some of these requirements for illustrative purposes (but this is where we are going).</p><ul><li>The source location must exist when attempting to transfer a SKU+qty from a source location to a destination location.</li><li>The source location has enough inventory to supply the requested qty being transferred. Said another way, we will not support going negative.</li></ul><p>To illustrate, suppose we wanted to transfer 1 unit of &apos;SKU-01&apos; from location &apos;LOC-01&apos; to &apos;LOC-02&apos;.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><p>We will want to remove one from &apos;LOC-01&apos; and then increase the quantity of &apos;LOC-02&apos;. 
An initial test to help us validate this behavior for our DAO might look like:</p><!--kg-card-begin: markdown--><pre><code class="language-scala">def createInventoryHelper(sku: String, qty: Int, location: String): InventorySingleRecord = {
  val create = InventorySingleRecordDao.create(database, sku, qty, location)
  Await.result(create, Duration.Inf).get
}
  
test(&quot;transfer&quot;) {
  createInventoryHelper(TEST_SKU, 1, BIN_01)
  createInventoryHelper(TEST_SKU, 0, BIN_02)

  val futureTrans = InventorySingleRecordDao.transfer(database, TEST_SKU, 1, BIN_01, BIN_02)
  Await.result(futureTrans, Duration.Inf)

  val futureFind = InventorySingleRecordDao.findAll(database)
  val findResult: Seq[InventorySingleRecord] = Await.result(futureFind, Duration.Inf)

  findResult should contain only (
    InventorySingleRecord(Some(1), &quot;NewSku&quot;, 0, BIN_01),
    InventorySingleRecord(Some(2), &quot;NewSku&quot;, 1, BIN_02),
  )
}
</code></pre>
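<p>Before turning to the DAO, it can help to pin down the intended semantics of the requirements above in plain Scala, against an in-memory map. This is only a reference model under my own naming (nothing here is part of the repository), and note that it enforces the no-negative rule that we will initially relax in the database version:</p>

```scala
// Reference model of the transfer rules, keyed by (sku, location) -> qty.
// Illustrative only: names like `transferModel` are mine, not the DAO's.
type Key = (String, String)

def transferModel(inv: Map[Key, Int], sku: String, qty: Int,
                  from: String, to: String): Either[String, Map[Key, Int]] =
  inv.get((sku, from)) match {
    case None =>
      Left("Failed to find source location") // requirement 1: source must exist
    case Some(srcQty) if srcQty < qty =>
      Left("Insufficient quantity")          // requirement 2: no going negative
    case Some(srcQty) =>
      // The destination record may not exist yet, so default it to 0.
      val destQty = inv.getOrElse((sku, to), 0)
      Right(inv.updated((sku, from), srcQty - qty)
               .updated((sku, to), destQty + qty))
  }

val start = Map(("SKU-01", "LOC-01") -> 1, ("SKU-01", "LOC-02") -> 0)
println(transferModel(start, "SKU-01", 1, "LOC-01", "LOC-02"))
```

<p>Total quantity is conserved by construction in this model; the hard part is preserving that invariant once the same steps run as concurrent database queries.</p>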
<!--kg-card-end: markdown--><p>Now to implement the <code>InventorySingleRecordDao.transfer</code> logic, we can reference the upsert logic from our <a href="https://honstain.com/slick-upsert-and-select/">previous post</a> that uses the Scala for comprehension and a database transaction. We can start by just modifying the source location and ignoring the destination location (the test won&apos;t pass, but we will be able to validate our transfer logic incrementally).</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-1.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><p>This gives us a basic overview of the transfer function in our DAO. We have a <code>for</code> comprehension with the query to retrieve the database record for the source (fromLocation) and then attempt to modify it (there is basic handling to address the possibility that we don&apos;t find the source record).</p><!--kg-card-begin: markdown--><pre><code class="language-scala">def transfer(db: PostgresProfile.backend.DatabaseDef,
             sku: String,
             qty: Int,
             fromLocation: String,
             toLocation: String
            ): Future[Int] = {

  val insert = for {
    fromRecord &lt;- {
      this.filter(x =&gt; x.location === fromLocation &amp;&amp; x.sku === sku).result.headOption
    }
    updateSource &lt;- {
      fromRecord match {
        case Some(InventorySingleRecord(_, `sku`, srcQty, `fromLocation`)) =&gt;
          val q = for { x &lt;- this if x.location === fromLocation &amp;&amp; x.sku === sku } yield x.qty
          q.update(srcQty - qty)
        case _ =&gt;
          DBIO.failed(new Exception(&quot;Failed to find source location&quot;))
      }
    }
  } yield updateSource
  db.run(insert.transactionally)
}
</code></pre>
<!--kg-card-end: markdown--><p>Running the test we just created, we will get a failure, but hopefully be able to see that the source &apos;Bin-01&apos; is decremented.</p><pre><code class="language-text">Vector(
	InventorySingleRecord(Some(2),NewSku,0,Bin-02),
	InventorySingleRecord(Some(1),NewSku,0,Bin-01))
did not contain only (
	InventorySingleRecord(Some(1),NewSku,0,Bin-01), 
	InventorySingleRecord(Some(2),NewSku,1,Bin-02))</code></pre><p><strong>Now we need to update the destination record.</strong></p><p>Just like when we updated the source location, we will want to query for it, but it is more likely that this record does not already exist (at least in this example, where we assume physical inventory already exists in a source location, it&apos;s much more likely that the sku+location combination of the destination is unknown to the database - remember that those two columns are integral to our design).</p><!--kg-card-begin: markdown--><pre><code class="language-scala">toRecord &lt;- {
  this.filter(x =&gt; x.location === toLocation &amp;&amp; x.sku === sku).result.headOption
}
</code></pre>
<!--kg-card-end: markdown--><p>Now that we possibly have a record for the destination, we can try to update or create. This should be very familiar if you worked through the <a href="https://honstain.com/slick-upsert-and-select/">upsert logic in my previous post</a>.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">createUpdateDestination &lt;- {
  toRecord match {
    case Some(InventorySingleRecord(_, `sku`, destQty, `toLocation`)) =&gt;
      // Update
      logger.debug(s&quot;Transfer from:$fromLocation to $toLocation found $destQty in destination&quot;)
      val q = for { x &lt;- this if x.location === toLocation &amp;&amp; x.sku === sku } yield x.qty
      q.update(destQty + qty)
    case _ =&gt;
      // Create - this is likely susceptible to write skew
      this += InventorySingleRecord(Option.empty, sku, qty, toLocation)
  }
}
</code></pre>
<!--kg-card-end: markdown--><p>Hopefully, with these pieces all put together you get a passing test.</p><h3 id="current-code-for-a-very-basic-transfer-of-inventory">Current Code for a Very Basic Transfer of Inventory</h3><!--kg-card-begin: markdown--><pre><code class="language-scala">def transfer(db: PostgresProfile.backend.DatabaseDef,
             sku: String,
             qty: Int,
             fromLocation: String,
             toLocation: String
            ): Future[Int] = {

  val insert = for {
    toRecord &lt;- {
      this.filter(x =&gt; x.location === toLocation &amp;&amp; x.sku === sku).result.headOption
    }
    fromRecord &lt;- {
      this.filter(x =&gt; x.location === fromLocation &amp;&amp; x.sku === sku).result.headOption
    }
    createUpdateDestination &lt;- {
      toRecord match {
        case Some(InventorySingleRecord(_, `sku`, destQty, `toLocation`)) =&gt;
          // Update
          logger.debug(s&quot;Transfer from:$fromLocation to $toLocation found $destQty in destination&quot;)
          val q = for { x &lt;- this if x.location === toLocation &amp;&amp; x.sku === sku } yield x.qty
          q.update(destQty + qty)
        case _ =&gt;
          this += InventorySingleRecord(Option.empty, sku, qty, toLocation)
      }
    }
    updateSource &lt;- {
      fromRecord match {
        case Some(InventorySingleRecord(_, `sku`, srcQty, `fromLocation`)) =&gt;
          val q = for { x &lt;- this if x.location === fromLocation &amp;&amp; x.sku === sku } yield x.qty
          q.update(srcQty - qty)
        case _ =&gt;
          DBIO.failed(new Exception(&quot;Failed to find source location&quot;))
      }
    }
  } yield updateSource
  db.run(insert.transactionally)
}
</code></pre>
<!--kg-card-end: markdown--><h2 id="exposing-the-flaws-in-this-initial-design">Exposing the Flaws in this Initial Design</h2><p>At this stage we hopefully have a basic DAO that implements the following.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">def findAll(db: PostgresProfile.backend.DatabaseDef): Future[Seq[InventorySingleRecord]]

def create(db: PostgresProfile.backend.DatabaseDef,
           sku: String,
           qty: Int,
           location: String
          ): Future[Option[InventorySingleRecord]]

def transfer(db: PostgresProfile.backend.DatabaseDef,
             sku: String,
             qty: Int,
             fromLocation: String,
             toLocation: String
            ): Future[Int]
</code></pre>
<!--kg-card-end: markdown--><p>I created a repository with a working Scala Scalatra service to snapshot this stage of the development:</p><ul><li><a href="https://bitbucket.org/honstain/scalatra-single-record-transfer-service/src/master/?ref=honstain.com">https://bitbucket.org/honstain/scalatra-single-record-transfer-service/src/master/</a></li></ul><p>This repo provides an example of a Scalatra service that exposes our DAO operations via a set of crude HTTP endpoints. The README.md provides some pointers on starting the service. If this is unfamiliar or you would like a refresher, I suggest reviewing the previous post where I covered just this aspect of Scalatra <a href="https://honstain.com/rest-in-a-scalatra-service/">http://honstain.com/rest-in-a-scalatra-service/</a>.</p><p>Before beginning this next step, I suggest you verify that you have a service that runs, with a database, and can respond to HTTP requests.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-2.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><p>Using a tool like Postman to transfer inventory between LOC-01 and LOC-02, you may notice that the service doesn&apos;t enforce any constraints yet: it can produce negative quantities and allows moving negative amounts. What we are really interested in is how consistent the database will be with our current DAO queries.</p><h3 id="exposing-consistency-problems">Exposing Consistency Problems</h3><p>One way to test our system might be to attempt to move inventory in parallel (for the same SKU and location group). 
Let&apos;s start with just two locations and a single SKU; trying to move back and forth between LOC-01 and LOC-02 should be enough to expose an issue.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-3.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><p>There are several tools that you could use for load and performance testing. You could even write your own script or test. I have opted to use <code>siege</code>, a common command-line tool you can probably retrieve from your Linux package manager (<a href="https://www.joedog.org/siege-home/?ref=honstain.com">https://www.joedog.org/siege-home/</a>). By defining a set of URLs, we will use siege to execute the following in parallel:</p><ul><li>Get all the inventory</li><li>Transfer 1 qty of SKU <code>SKU-01</code> from <code>LOC-01</code> to <code>LOC-02</code></li><li>Transfer 1 qty of SKU <code>SKU-01</code> from <code>LOC-02</code> to <code>LOC-01</code></li></ul><!--kg-card-begin: markdown--><pre><code class="language-text"># siege_urls.txt - a urls file https://www.joedog.org/siege-manual/#a05

127.0.0.1:8080/

127.0.0.1:8080/transfer POST {&quot;sku&quot;: &quot;SKU-01&quot;,&quot;qty&quot;: 1,&quot;fromLocation&quot;: &quot;LOC-01&quot;,&quot;toLocation&quot;: &quot;LOC-02&quot;}

127.0.0.1:8080/transfer POST {&quot;sku&quot;: &quot;SKU-01&quot;,&quot;qty&quot;: 1,&quot;fromLocation&quot;: &quot;LOC-02&quot;,&quot;toLocation&quot;: &quot;LOC-01&quot;}
</code></pre>
<!--kg-card-end: markdown--><p>This command will start siege running with the following arguments:</p><ul><li><code>-v</code> verbose mode</li><li><code>-c2</code> 2 concurrent requests</li><li><code>-r10</code> run the test 10 times</li><li><code>--content-type</code> specifying the content type for our API</li><li><code>-f</code> the URLs file</li></ul><p><code>siege -v -c2 -r10 --content-type &quot;application/json&quot; -f siege_urls.txt</code></p><!--kg-card-begin: markdown--><pre><code class="language-text">** SIEGE 4.0.4
** Preparing 2 concurrent users for battle.
The server is now under siege...
HTTP/1.1 200     0.01 secs:      91 bytes ==&gt; GET  /
HTTP/1.1 200     0.07 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.06 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.04 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.00 secs:      91 bytes ==&gt; GET  /
HTTP/1.1 200     0.03 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.02 secs:      91 bytes ==&gt; GET  /
HTTP/1.1 200     0.04 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.04 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.04 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.01 secs:      91 bytes ==&gt; GET  /
HTTP/1.1 200     0.07 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.01 secs:      92 bytes ==&gt; GET  /
HTTP/1.1 200     0.06 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.05 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.07 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.01 secs:      91 bytes ==&gt; GET  /
HTTP/1.1 200     0.05 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer
HTTP/1.1 200     0.01 secs:      92 bytes ==&gt; GET  /
HTTP/1.1 200     0.04 secs:       0 bytes ==&gt; POST http://127.0.0.1:8080/transfer

Transactions:		          20 hits
Availability:		      100.00 %
Elapsed time:		        0.38 secs
Data transferred:	        0.00 MB
Response time:		        0.04 secs
Transaction rate:	       52.63 trans/sec
Throughput:		        0.00 MB/sec
Concurrency:		        1.92
Successful transactions:          20
Failed transactions:	           0
Longest transaction:	        0.07
Shortest transaction:	        0.00
</code></pre>
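<p>As a quick sanity check on the summary block, the transaction rate siege reports is just hits divided by elapsed time:</p>

```scala
// Numbers from the siege summary above.
val hits = 20
val elapsedSecs = 0.38
val transactionRate = hits / elapsedSecs
println(f"$transactionRate%.2f trans/sec") // 52.63 trans/sec
```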
<!--kg-card-end: markdown--><p>Keeping in mind that we started with qty 2 in LOC-01 and qty 0 in LOC-02, let&apos;s see what we have now. NOTE - you will likely have different results; if the results look good, please repeat the test, since we are trying to demonstrate a concurrency issue.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-4.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><p>You can look through the logs of the Scalatra service and try to spot where things first go off the rails. I added the following log line to the update source section.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">updateSource &lt;- {
  fromRecord match {
    case Some(InventorySingleRecord(_, `sku`, srcQty, `fromLocation`)) =&gt;

      val destinationQty: Int = if (toRecord.isDefined) toRecord.get.qty else 0
      logger.debug(s&quot;Transfer $qty from:$fromLocation (had qty:$srcQty) to $toLocation (had qty:$destinationQty)&quot;)

      val q = for { x &lt;- this if x.location === fromLocation &amp;&amp; x.sku === sku } yield x.qty
      q.update(srcQty - qty)
    case _ =&gt;
      DBIO.failed(new Exception(&quot;Failed to find source location&quot;))
  }
}
</code></pre>
<!--kg-card-end: markdown--><p>An example of my Scalatra logs that demonstrate our consistency issue:</p><!--kg-card-begin: markdown--><pre><code class="language-text"># Two siege calls happen in close succession, both see qty 2 and move 1
Transfer 1 from:LOC-01 (had qty:2) to LOC-02 (had qty:0)
Transfer 1 from:LOC-01 (had qty:2) to LOC-02 (had qty:0)

# Two siege calls happen again but find that only 1 unit was moved
# We are already in an inconsistent state
Transfer 1 from:LOC-02 (had qty:1) to LOC-01 (had qty:1)
Transfer 1 from:LOC-02 (had qty:1) to LOC-01 (had qty:1)

# The problem just spirals out from here.
Transfer 1 from:LOC-01 (had qty:2) to LOC-02 (had qty:0)
Transfer 1 from:LOC-01 (had qty:1) to LOC-02 (had qty:0)
Transfer 1 from:LOC-02 (had qty:1) to LOC-01 (had qty:1)
Transfer 1 from:LOC-02 (had qty:0) to LOC-01 (had qty:2)
</code></pre>
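<p>The paired log lines above show two calls reading the same starting quantity and each writing back a value computed from that stale read. The arithmetic of this lost update can be reproduced deterministically in plain Scala, with two vars standing in for the database rows (purely illustrative, no database involved):</p>

```scala
// LOC-01 starts with qty 2, LOC-02 with qty 0: a total of 2 units.
var loc01 = 2
var loc02 = 0

// Both "transactions" read the source before either one writes.
val readA = loc01 // client A sees 2
val readB = loc01 // client B also sees 2

loc01 = readA - 1 // A writes 1
loc01 = readB - 1 // B overwrites with 1: A's decrement is lost

// Both destination updates land, so a unit appears out of thin air.
loc02 += 1
loc02 += 1

println(s"LOC-01=$loc01 LOC-02=$loc02 total=${loc01 + loc02}") // total=3, not 2
```

<p>One decrement is overwritten while both increments land, so the total grows from 2 to 3.</p>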
<!--kg-card-end: markdown--><p>Why is this happening? First, let&apos;s consider that we are executing multiple queries in a transaction, but our isolation level is the default Read Committed <a href="https://www.postgresql.org/docs/9.1/transaction-iso.html?ref=honstain.com">https://www.postgresql.org/docs/9.1/transaction-iso.html</a>. The two transactions overlap, and as a result one of the clients does not produce the desired change.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-5.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><h3 id="attempt-to-enforce-consistency">Attempt to Enforce Consistency</h3><p>Often the first thing people reach for is to ratchet up the isolation level and let the database sort things out. We will try that approach, jump our query up to the Serializable isolation level, and see what happens.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">db.run(insert.transactionally.withTransactionIsolation(TransactionIsolation.Serializable))
</code></pre>
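<p>One practical consequence of Serializable: PostgreSQL resolves conflicts by aborting one of the competing transactions, so callers are generally expected to retry. Below is a minimal retry sketch; it is a hypothetical helper, not code from the repository, and a production version should only retry serialization failures (SQLSTATE 40001) rather than every exception:</p>

```scala
import scala.util.{Failure, Success, Try}

// Retry `attempt` up to maxAttempts times; `attempt` stands in for a db.run(...) call.
def retrySerialization[T](maxAttempts: Int)(attempt: () => T): T =
  Try(attempt()) match {
    case Success(v) => v
    case Failure(_) if maxAttempts > 1 =>
      // Hypothetical: a real helper should inspect the error for SQLSTATE 40001.
      retrySerialization(maxAttempts - 1)(attempt)
    case Failure(e) => throw e
  }

// Simulate a transfer that is aborted twice before it commits.
var calls = 0
val outcome = retrySerialization(5) { () =>
  calls += 1
  if (calls < 3) throw new RuntimeException("ERROR: could not serialize access due to concurrent update")
  "committed"
}
println(s"$outcome after $calls attempts") // committed after 3 attempts
```

<p>With a helper like this wrapping <code>db.run</code>, an aborted transfer becomes a retried transfer instead of a client-visible error.</p>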
<!--kg-card-end: markdown--><p>Running siege again, you might see something like this:</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-7.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><p>NOTE ON SIEGE - Siege is a brute-force sort of tool; it will blindly keep cycling through its URLs. Hence a previous failure will not cause it to stop or retry (as currently configured), leading to unusual behavior for larger numbers of iterations, as it makes no attempt to model a real user or the physical world.</p><p>I have organized the log data to better illustrate the success and failure of each call (assigning a color to each distinct transfer call). Note that I also started passing around a user id (a random int assigned at the start of the REST call) to help us trace things; it probably would have been better to refer to it as a tracing id / provenance id.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-8.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><p>Now we have consistent data, but PostgreSQL is achieving that by aborting our transactions whenever it identifies a problem. It is helpful to reference the PostgreSQL docs at this stage: <a href="https://www.postgresql.org/docs/9.1/transaction-iso.html?ref=honstain.com">https://www.postgresql.org/docs/9.1/transaction-iso.html</a></p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/04/image-9.png" class="kg-image" alt="Inventory Management Transfer" loading="lazy"></figure><p>If you siege long enough, you may also observe PostgreSQL detect and kill a deadlock.</p><pre><code class="language-text">14:40:20.312 [scala-execution-context-global-39] DEBUG o.b.h.inventory.app.ToyInventory - user: 543 - ERROR: deadlock detected
  Detail: Process 25421 waits for ShareLock on transaction 206692; blocked by process 25419.
Process 25419 waits for ShareLock on transaction 206691; blocked by process 25421.
  Hint: See server log for query details.
  Where: while updating tuple (0,91) in relation &quot;inventory_single&quot;</code></pre><p>I will leave it to the reader to try Repeatable Read as an exercise.</p><h2 id="summary">Summary</h2><p>We have made an attempt to implement the functionality for modeling the transfer of physical inventory from one location to another. While tests pass and things work on the happy path, when we introduce concurrency we have problems. In our next post we will explore some additional options to help us maintain consistency.</p>]]></content:encoded></item><item><title><![CDATA[Slick Upsert and Select]]></title><description><![CDATA[<p>In our previous post, we wanted to create and update a record from our PostgreSQL database in our Scalatra service to manage inventory data.</p><ul><li><a href="https://honstain.com/scalatra-inventory-management-service/">Creating a Scalatra Inventory Management Service</a></li></ul><p>We only got as far as using raw SQL to do the query, and this had an added benefit of</p>]]></description><link>https://honstain.com/slick-upsert-and-select-2/</link><guid isPermaLink="false">65b52aaf7a5d430e36b8ec8a</guid><category><![CDATA[Scalatra]]></category><category><![CDATA[Scala]]></category><category><![CDATA[Slick]]></category><category><![CDATA[PostgreSQL]]></category><dc:creator><![CDATA[Anthony Honstain]]></dc:creator><pubDate>Wed, 03 Apr 2019 15:22:00 GMT</pubDate><media:content url="https://honstain.com/content/images/2019/04/scala_upsert.PNG" medium="image"/><content:encoded><![CDATA[<img src="https://honstain.com/content/images/2019/04/scala_upsert.PNG" alt="Slick Upsert and Select"><p>In our previous post, we wanted to create and update a record from our PostgreSQL database in our Scalatra service to manage inventory data.</p><ul><li><a href="https://honstain.com/scalatra-inventory-management-service/">Creating a Scalatra Inventory Management Service</a></li></ul><p>We only got as far as using raw SQL to do the query, and this had an added 
benefit of being an atomic operation. Now we would like to try to implement the same logic using Slick.</p><h3 id="improving-the-return-type-of-our-raw-sql-query">Improving the Return Type of our Raw SQL Query</h3><p>A helpful reference here is <a href="http://slick.lightbend.com/doc/3.3.0/sql.html?ref=honstain.com">http://slick.lightbend.com/doc/3.3.0/sql.html</a>. I would have saved myself some time by sitting down and reading it end to end before I started.</p><p>We had the following when we left off last time:</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               location: String
              ): Future[Seq[(String, Int, String)]] = {
    val query: DBIO[Seq[(String, Int, String)]] =
      sql&quot;&quot;&quot;
           INSERT INTO inventory_single (sku, qty, location)
           VALUES ($sku, $qty, $location)
           ON CONFLICT ON CONSTRAINT inventory_single_sku_location_key
              DO UPDATE SET qty = EXCLUDED.qty
           RETURNING sku, qty, location;
        &quot;&quot;&quot;.as[(String, Int, String)]
    db.run(query)
  }
</code></pre>
<!--kg-card-end: markdown--><p>This code was handling and returning the tuple <code>(String, Int, String)</code> instead of the <code>InventorySingleRecord</code> case class.</p><p>Slick provides us with <code>sql</code>, <code>sqlu</code>, and <code>tsql</code> interpolators.</p><ul><li><code>sql</code> is used for queries that produce a sequence of tuples; it has type <code>DBIO[Seq[&lt;tuples&gt;]]</code> and can use implicit <code>GetResult</code> converters.</li><li><code>sqlu</code> is used for statements that produce a row count; it has type <code>DBIO[Int]</code>.</li><li><code>tsql</code> can enforce compile-time type checking, but requires access to a configuration that defines the database schema.</li></ul><p>We will continue to use the <code>sql</code> interpolator. Let&apos;s define our own converter to the <code>InventorySingleRecord</code> class and update this query (you will need to update your test as well).</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  implicit val getInventorySingleRecord: GetResult[InventorySingleRecord] =
    GetResult(r =&gt; InventorySingleRecord(r.&lt;&lt;, r.&lt;&lt;, r.&lt;&lt;, r.&lt;&lt;))

  def create(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               location: String
              ): Future[Seq[InventorySingleRecord]] = {

    val query: DBIO[Seq[InventorySingleRecord]] =
      sql&quot;&quot;&quot;
           INSERT INTO inventory_single (sku, qty, location)
           VALUES ($sku, $qty, $location)
           ON CONFLICT ON CONSTRAINT inventory_single_sku_location_key
              DO UPDATE SET qty = EXCLUDED.qty
           RETURNING id, sku, qty, location;
        &quot;&quot;&quot;.as[InventorySingleRecord]
    db.run(query)
  }
</code></pre>
<!--kg-card-end: markdown--><p>Now we have the desired type instead of a tuple, but we still have a sequence <code>Seq[InventorySingleRecord]</code>, which is not ideal given that this is a create for a single record. This can be addressed with headOption (<a href="https://www.garysieling.com/blog/scala-headoption-example?ref=honstain.com">https://www.garysieling.com/blog/scala-headoption-example</a>) to get the first element and update the return type <code>Option[InventorySingleRecord]</code></p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               location: String
              ): Future[Option[InventorySingleRecord]] = {

    val query: DBIO[Option[InventorySingleRecord]] =
      sql&quot;&quot;&quot;
           INSERT INTO inventory_single (sku, qty, location)
           VALUES ($sku, $qty, $location)
           ON CONFLICT ON CONSTRAINT inventory_single_sku_location_key
              DO UPDATE SET qty = EXCLUDED.qty
           RETURNING id, sku, qty, location;
        &quot;&quot;&quot;.as[InventorySingleRecord].headOption
    db.run(query)
  }
</code></pre>
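<p>As an aside, the <code>headOption</code> combinator on a <code>DBIO</code> mirrors the familiar behavior on plain Scala collections; this Slick-free sketch shows why an empty result becomes <code>None</code> rather than an exception:</p>

```scala
// headOption returns the first element wrapped in Some, or None when
// the sequence is empty - unlike head, which throws on an empty Seq.
val rows: Seq[(String, Int, String)] = Seq(("SKU-01", 2, "LOC-01"))
val first: Option[(String, Int, String)] = rows.headOption

val empty: Seq[(String, Int, String)] = Seq.empty
val none: Option[(String, Int, String)] = empty.headOption
// first is Some(("SKU-01", 2, "LOC-01")) and none is None.
```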
<!--kg-card-end: markdown--><p>Our tests then validate the option instead of a sequence.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">val future = InventorySingleRecordDao.create(database, TEST_SKU, 1, BIN_01)
val result: Option[InventorySingleRecord] = Await.result(future, Duration.Inf)
result should equal(Some(InventorySingleRecord(Some(1), TEST_SKU, 1, BIN_01)))
</code></pre>
<!--kg-card-end: markdown--><h3 id="implement-the-raw-sql-as-a-slick-query">Implement the raw SQL as a Slick Query</h3><p>Now let&apos;s write the same query using Slick. We will start by using a Scala <code>for</code> comprehension (a nice reference <a href="https://medium.com/@scalaisfun/scala-for-comprehension-tricks-9c8b9fe31778?ref=honstain.com">https://medium.com/@scalaisfun/scala-for-comprehension-tricks-9c8b9fe31778</a>).</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               location: String
              ): Future[Option[InventorySingleRecord]] = {
              
    val upsert = for {
      existing &lt;- {
        this.filter(x =&gt; x.location === location &amp;&amp; x.sku === sku).forUpdate.result.headOption
      }
    } yield existing
    db.run(upsert.transactionally)
  }
</code></pre>
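<p>The <code>for</code> comprehension above is just syntactic sugar for <code>flatMap</code>/<code>map</code>, which is how Slick lets us chain <code>DBIO</code> actions. A Slick-free sketch of the equivalence, using <code>Option</code> in place of <code>DBIO</code>:</p>

```scala
// A for comprehension desugars into nested flatMap/map calls.
val sugared: Option[Int] = for {
  a <- Some(2)
  b <- Some(3)
} yield a * b

// The equivalent hand-written chain.
val desugared: Option[Int] = Some(2).flatMap(a => Some(3).map(b => a * b))
// Both evaluate to Some(6).
```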
<!--kg-card-end: markdown--><p>This first step lets us retrieve a record if it already exists, and this is the model we will use to add additional functionality. </p><p>We will compose several queries here:</p><ul><li>One to find the existing record (if it exists) <code>this.filter(x =&gt; x.location === location &amp;&amp; x.sku === sku).forUpdate.result.headOption</code>,</li><li>A Query to create <code>TableQuery[InventorySingleRecords] += InventorySingleRecord(Option.empty, sku, qty, location)</code> </li><li>Finally one to retrieve the value <code>TableQuery[InventorySingleRecords].filter(x =&gt; x.location === location &amp;&amp; x.sku === sku).result.headOption</code></li></ul><p>I have left the pattern matching unimplemented for the update case just to try and keep things reasonably simple.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               location: String
              ): Future[Option[InventorySingleRecord]] = {
    val upsert = for {
      existing &lt;- {
        this.filter(x =&gt; x.location === location &amp;&amp; x.sku === sku).forUpdate.result.headOption
      }
      _ &lt;- {
        existing match {
          case Some(InventorySingleRecord(_, `sku`, _, `location`)) =&gt; 
            // Update
            ???
          case _ =&gt; 
            // Create a new record
            TableQuery[InventorySingleRecords] += InventorySingleRecord(Option.empty, sku, qty, location)
        }
      }
      updated &lt;- {
        TableQuery[InventorySingleRecords].filter(x =&gt; x.location === location &amp;&amp; x.sku === sku).result.headOption
      }
    } yield updated
    db.run(upsert.transactionally)
  }
</code></pre>
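<p>Note the backticks around <code>sku</code> and <code>location</code> in the pattern: they match against the enclosing method parameters as stable identifiers, whereas bare lowercase names would bind fresh variables that match anything. A Slick-free sketch:</p>

```scala
val sku = "SKU-01"

def describe(existing: Option[(String, Int)]): String = existing match {
  // With backticks, this case only matches when the first element
  // equals the enclosing sku value.
  case Some((`sku`, qty)) => s"found $sku with qty $qty"
  // A bare name here binds whatever string is present.
  case Some((other, _)) => s"different sku: $other"
  case None => "no record"
}
// describe(Some(("SKU-01", 2))) == "found SKU-01 with qty 2"
```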
<!--kg-card-end: markdown--><p>This should satisfy our tests for a single create, but update is still not implemented.</p><p>The logic for doing an update can be implemented as a standard Slick update query.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">        existing match {
          case Some(InventorySingleRecord(_, `sku`, _, `location`)) =&gt; // Update
            val updateFoo = TableQuery[InventorySingleRecords]
            val q = for {x &lt;- updateFoo if x.location === location &amp;&amp; x.sku === sku} yield x.qty
            q.update(qty)
</code></pre>
<!--kg-card-end: markdown--><p>Note that in our update logic we have opted to ignore the existing value for the <code>qty</code> column. You could certainly utilize it if we wanted to log or use it in your update logic, that might look something like this <code>case Some(InventorySingleRecord(_, sku, existingQty, location)) =&gt;</code></p><p>This now should satisfy the tests for both create and update.</p><h3 id="conclusion">Conclusion</h3><p>We have implemented our create logic in two different ways, one in raw SQL and the other using the Slick functional relational mapper. Hopefully, you found the comparison helpful.</p>]]></content:encoded></item><item><title><![CDATA[Slick Upsert and Select]]></title><description><![CDATA[<p>In our previous post, we wanted to create and update a record from our PostgreSQL database in our Scalatra service to manage inventory data.</p><ul><li><a href="https://honstain.com/scalatra-inventory-management-service/">Creating a Scalatra Inventory Management Service</a></li></ul><p>We only got as far as using raw SQL to do the query, and this had an added benefit of</p>]]></description><link>https://honstain.com/slick-upsert-and-select/</link><guid isPermaLink="false">65b526ba7a5d430e36b8ec00</guid><category><![CDATA[Scalatra]]></category><category><![CDATA[Scala]]></category><category><![CDATA[Slick]]></category><category><![CDATA[PostgreSQL]]></category><dc:creator><![CDATA[Anthony Honstain]]></dc:creator><pubDate>Wed, 03 Apr 2019 15:22:00 GMT</pubDate><media:content url="https://honstain.com/content/images/2019/04/scala_upsert.PNG" medium="image"/><content:encoded><![CDATA[<img src="https://honstain.com/content/images/2019/04/scala_upsert.PNG" alt="Slick Upsert and Select"><p>In our previous post, we wanted to create and update a record from our PostgreSQL database in our Scalatra service to manage inventory data.</p><ul><li><a href="https://honstain.com/scalatra-inventory-management-service/">Creating a Scalatra Inventory 
Management Service</a></li></ul><p>We only got as far as using raw SQL to do the query, and this had an added benefit of being an atomic operation. Now we would like to try to implement the same logic using Slick.</p><h3 id="improving-the-return-type-of-our-raw-sql-query">Improving the Return Type of our Raw SQL Query</h3><p>A helpful reference here is <a href="http://slick.lightbend.com/doc/3.3.0/sql.html?ref=honstain.com">http://slick.lightbend.com/doc/3.3.0/sql.html</a>. I would have saved myself some time by sitting down and reading it end to end before I started.</p><p>We had the following when we left off last time:</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               location: String
              ): Future[Seq[(String, Int, String)]] = {
    val query: DBIO[Seq[(String, Int, String)]] =
      sql&quot;&quot;&quot;
           INSERT INTO inventory_single (sku, qty, location)
           VALUES ($sku, $qty, $location)
           ON CONFLICT ON CONSTRAINT inventory_single_sku_location_key
              DO UPDATE SET qty = EXCLUDED.qty
           RETURNING sku, qty, location;
        &quot;&quot;&quot;.as[(String, Int, String)]
    db.run(query)
  }
</code></pre>
<!--kg-card-end: markdown--><p>This code was handling and returning the tuple <code>(String, Int, String)</code> instead of the <code>InventorySingleRecord</code> case class.</p><p>Slick provides us with <code>sql</code>, <code>sqlu</code>, and <code>tsql</code> interpolators.</p><ul><li><code>sql</code> is used for queries that produce a sequence of tuples; it has type <code>DBIO[Seq[&lt;tuples&gt;]]</code> and can use implicit <code>GetResult</code> converters.</li><li><code>sqlu</code> is used for statements that produce a row count; it has type <code>DBIO[Int]</code>.</li><li><code>tsql</code> can enforce compile-time type checking, but requires access to a configuration that defines the database schema.</li></ul><p>We will continue to use the <code>sql</code> interpolator. Let&apos;s define our own converter to the <code>InventorySingleRecord</code> class and update this query (you will need to update your test as well).</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  implicit val getInventorySingleRecord: GetResult[InventorySingleRecord] =
    GetResult(r =&gt; InventorySingleRecord(r.&lt;&lt;, r.&lt;&lt;, r.&lt;&lt;, r.&lt;&lt;))

  def create(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               location: String
              ): Future[Seq[InventorySingleRecord]] = {

    val query: DBIO[Seq[InventorySingleRecord]] =
      sql&quot;&quot;&quot;
           INSERT INTO inventory_single (sku, qty, location)
           VALUES ($sku, $qty, $location)
           ON CONFLICT ON CONSTRAINT inventory_single_sku_location_key
              DO UPDATE SET qty = EXCLUDED.qty
           RETURNING id, sku, qty, location;
        &quot;&quot;&quot;.as[InventorySingleRecord]
    db.run(query)
  }
</code></pre>
<!--kg-card-end: markdown--><p>Now we have the desired type instead of a tuple, but we still have a sequence <code>Seq[InventorySingleRecord]</code>, which is not ideal given that this is a create for a single record. This can be addressed with headOption (<a href="https://www.garysieling.com/blog/scala-headoption-example?ref=honstain.com">https://www.garysieling.com/blog/scala-headoption-example</a>) to get the first element and update the return type <code>Option[InventorySingleRecord]</code></p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               location: String
              ): Future[Option[InventorySingleRecord]] = {

    val query: DBIO[Option[InventorySingleRecord]] =
      sql&quot;&quot;&quot;
           INSERT INTO inventory_single (sku, qty, location)
           VALUES ($sku, $qty, $location)
           ON CONFLICT ON CONSTRAINT inventory_single_sku_location_key
              DO UPDATE SET qty = EXCLUDED.qty
           RETURNING id, sku, qty, location;
        &quot;&quot;&quot;.as[InventorySingleRecord].headOption
    db.run(query)
  }
</code></pre>
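<p>As an aside, the <code>headOption</code> combinator on a <code>DBIO</code> mirrors the familiar behavior on plain Scala collections; this Slick-free sketch shows why an empty result becomes <code>None</code> rather than an exception:</p>

```scala
// headOption returns the first element wrapped in Some, or None when
// the sequence is empty - unlike head, which throws on an empty Seq.
val rows: Seq[(String, Int, String)] = Seq(("SKU-01", 2, "LOC-01"))
val first: Option[(String, Int, String)] = rows.headOption

val empty: Seq[(String, Int, String)] = Seq.empty
val none: Option[(String, Int, String)] = empty.headOption
// first is Some(("SKU-01", 2, "LOC-01")) and none is None.
```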
<!--kg-card-end: markdown--><p>Our tests then validate the option instead of a sequence.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">val future = InventorySingleRecordDao.create(database, TEST_SKU, 1, BIN_01)
val result: Option[InventorySingleRecord] = Await.result(future, Duration.Inf)
result should equal(Some(InventorySingleRecord(Some(1), TEST_SKU, 1, BIN_01)))
</code></pre>
<!--kg-card-end: markdown--><h3 id="implement-the-raw-sql-as-a-slick-query">Implement the raw SQL as a Slick Query</h3><p>Now let&apos;s write the same query using Slick. We will start by using a Scala <code>for</code> comprehension (a nice reference <a href="https://medium.com/@scalaisfun/scala-for-comprehension-tricks-9c8b9fe31778?ref=honstain.com">https://medium.com/@scalaisfun/scala-for-comprehension-tricks-9c8b9fe31778</a>).</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               location: String
              ): Future[Option[InventorySingleRecord]] = {
              
    val upsert = for {
      existing &lt;- {
        this.filter(x =&gt; x.location === location &amp;&amp; x.sku === sku).forUpdate.result.headOption
      }
    } yield existing
    db.run(upsert.transactionally)
  }
</code></pre>
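<p>The <code>for</code> comprehension above is just syntactic sugar for <code>flatMap</code>/<code>map</code>, which is how Slick lets us chain <code>DBIO</code> actions. A Slick-free sketch of the equivalence, using <code>Option</code> in place of <code>DBIO</code>:</p>

```scala
// A for comprehension desugars into nested flatMap/map calls.
val sugared: Option[Int] = for {
  a <- Some(2)
  b <- Some(3)
} yield a * b

// The equivalent hand-written chain.
val desugared: Option[Int] = Some(2).flatMap(a => Some(3).map(b => a * b))
// Both evaluate to Some(6).
```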
<!--kg-card-end: markdown--><p>This first step lets us retrieve a record if it already exists, and this is the model we will use to add additional functionality. </p><p>We will compose several queries here:</p><ul><li>One to find the existing record (if it exists) <code>this.filter(x =&gt; x.location === location &amp;&amp; x.sku === sku).forUpdate.result.headOption</code>,</li><li>A Query to create <code>TableQuery[InventorySingleRecords] += InventorySingleRecord(Option.empty, sku, qty, location)</code> </li><li>Finally one to retrieve the value <code>TableQuery[InventorySingleRecords].filter(x =&gt; x.location === location &amp;&amp; x.sku === sku).result.headOption</code></li></ul><p>I have left the pattern matching unimplemented for the update case just to try and keep things reasonably simple.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               location: String
              ): Future[Option[InventorySingleRecord]] = {
    val upsert = for {
      existing &lt;- {
        this.filter(x =&gt; x.location === location &amp;&amp; x.sku === sku).forUpdate.result.headOption
      }
      _ &lt;- {
        existing match {
          case Some(InventorySingleRecord(_, `sku`, _, `location`)) =&gt; 
            // Update
            ???
          case _ =&gt; 
            // Create a new record
            TableQuery[InventorySingleRecords] += InventorySingleRecord(Option.empty, sku, qty, location)
        }
      }
      updated &lt;- {
        TableQuery[InventorySingleRecords].filter(x =&gt; x.location === location &amp;&amp; x.sku === sku).result.headOption
      }
    } yield updated
    db.run(upsert.transactionally)
  }
</code></pre>
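<p>Note the backticks around <code>sku</code> and <code>location</code> in the pattern: they match against the enclosing method parameters as stable identifiers, whereas bare lowercase names would bind fresh variables that match anything. A Slick-free sketch:</p>

```scala
val sku = "SKU-01"

def describe(existing: Option[(String, Int)]): String = existing match {
  // With backticks, this case only matches when the first element
  // equals the enclosing sku value.
  case Some((`sku`, qty)) => s"found $sku with qty $qty"
  // A bare name here binds whatever string is present.
  case Some((other, _)) => s"different sku: $other"
  case None => "no record"
}
// describe(Some(("SKU-01", 2))) == "found SKU-01 with qty 2"
```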
<!--kg-card-end: markdown--><p>This should satisfy our tests for a single create, but update is still not implemented.</p><p>The logic for doing an update can be implemented as a standard Slick update query.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">        existing match {
          case Some(InventorySingleRecord(_, `sku`, _, `location`)) =&gt; // Update
            val updateFoo = TableQuery[InventorySingleRecords]
            val q = for {x &lt;- updateFoo if x.location === location &amp;&amp; x.sku === sku} yield x.qty
            q.update(qty)
</code></pre>
<!--kg-card-end: markdown--><p>Note that in our update logic we have opted to ignore the existing value for the <code>qty</code> column. You could certainly utilize it if we wanted to log or use it in your update logic, that might look something like this <code>case Some(InventorySingleRecord(_, sku, existingQty, location)) =&gt;</code></p><p>This now should satisfy the tests for both create and update.</p><h3 id="conclusion">Conclusion</h3><p>We have implemented our create logic in two different ways, one in raw SQL and the other using the Slick functional relational mapper. Hopefully, you found the comparison helpful.</p>]]></content:encoded></item><item><title><![CDATA[Scalatra Inventory Management Service]]></title><description><![CDATA[<h2 id="overview">Overview </h2><p>In our previous set of posts, we added progressively more functionality to our basic Scalatra service. </p><ul><li><a href="https://honstain.com/scalatra-giter8/">Creating a Scalatra service</a> </li><li><a href="https://honstain.com/rest-in-a-scalatra-service/">Scalatra and REST</a></li><li><a href="https://honstain.com/scalatra-2-6-4-postgresql/">Scalatra with Slick and PostgreSQL</a></li></ul><p>Now we would like to explore implementing a system for tracking inventory. 
What you will get from this post:</p><ul><li>Creating a</li></ul>]]></description><link>https://honstain.com/scalatra-inventory-management-service-2/</link><guid isPermaLink="false">65b52aaf7a5d430e36b8ec89</guid><category><![CDATA[Scala]]></category><category><![CDATA[Scalatra]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[Slick]]></category><dc:creator><![CDATA[Anthony Honstain]]></dc:creator><pubDate>Sat, 16 Mar 2019 19:57:14 GMT</pubDate><media:content url="https://honstain.com/content/images/2019/03/single_record_banner.JPG" medium="image"/><content:encoded><![CDATA[<h2 id="overview">Overview </h2><img src="https://honstain.com/content/images/2019/03/single_record_banner.JPG" alt="Scalatra Inventory Management Service"><p>In our previous set of posts, we added progressively more functionality to our basic Scalatra service. </p><ul><li><a href="https://honstain.com/scalatra-giter8/">Creating a Scalatra service</a> </li><li><a href="https://honstain.com/rest-in-a-scalatra-service/">Scalatra and REST</a></li><li><a href="https://honstain.com/scalatra-2-6-4-postgresql/">Scalatra with Slick and PostgreSQL</a></li></ul><p>Now we would like to explore implementing a system for tracking inventory. What you will get from this post:</p><ul><li>Creating a DB model for a simplified service (tracking inventory)</li><li>Using Slick to query that data</li><li>Using Slick to insert data</li><li>Using Slick and PostgreSQL to upsert</li><li>Tests for the new logic.</li></ul><h2 id="creating-a-service-to-track-inventory-levels">Creating a Service to Track Inventory Levels</h2><h3 id="create-the-database-schema">Create the Database Schema</h3><p>Let&apos;s start with a very basic schema that can track a product/SKU (if you are unfamiliar with the SKU terminology I recommend reading <a href="https://en.wikipedia.org/wiki/Stock_keeping_unit?ref=honstain.com">https://en.wikipedia.org/wiki/Stock_keeping_unit</a>), a location, and a quantity. 
A location could have multiple SKUs in it, and a SKU can be in more than one location.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/03/image.png" class="kg-image" alt="Scalatra Inventory Management Service" loading="lazy"></figure><!--kg-card-begin: markdown--><pre><code class="language-SQL">CREATE TABLE inventory_single
(
  id bigserial NOT NULL,
  sku text,
  qty integer,
  location text,
  CONSTRAINT pk_single PRIMARY KEY (id),
  UNIQUE (sku, location)
);
</code></pre>
<!--kg-card-end: markdown--><p>We can create a few example pieces of inventory:</p><!--kg-card-begin: markdown--><pre><code class="language-SQL">INSERT INTO inventory_single(sku, qty, location) VALUES
(&apos;SKU-01&apos;, 2, &apos;LOC-01&apos;),
(&apos;SKU-01&apos;, 0, &apos;LOC-02&apos;)
;
</code></pre>
<!--kg-card-end: markdown--><h3 id="create-a-dao-for-slick">Create a DAO for Slick</h3><p>Now that we have our database schema, let&apos;s set up our Scala code to access it. I have chosen here to abstract my database access and Slick queries behind a DAO object. I referred to this when trying to organize my DAO: <a href="https://sap1ens.com/blog/2015/07/26/scala-slick-3-how-to-start/?ref=honstain.com">https://sap1ens.com/blog/2015/07/26/scala-slick-3-how-to-start/</a>; I also found this helpful: <a href="https://reactore.com/repository-patterngeneric-dao-implementation-in-scala-using-slick-3/?ref=honstain.com">https://reactore.com/repository-patterngeneric-dao-implementation-in-scala-using-slick-3/</a>.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">import org.slf4j.{Logger, LoggerFactory}
import slick.jdbc.{PostgresProfile, TransactionIsolation}
import slick.jdbc.PostgresProfile.api._

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

case class InventorySingleRecord(
                                  id: Option[Int],
                                  sku: String,
                                  qty: Int,
                                  location: String
                                )

class InventorySingleRecords(tag: Tag) extends Table[InventorySingleRecord](tag, &quot;inventory_single&quot;) {
  def id = column[Int](&quot;id&quot;, O.PrimaryKey, O.AutoInc)
  def sku = column[String](&quot;sku&quot;)
  def qty = column[Int](&quot;qty&quot;)
  def location = column[String](&quot;location&quot;)
  def * =
    (id.?, sku, qty, location) &lt;&gt; (InventorySingleRecord.tupled, InventorySingleRecord.unapply)
}

object InventorySingleRecordDao extends TableQuery(new InventorySingleRecords(_)) {

  val logger: Logger = LoggerFactory.getLogger(getClass)

  def findAll(db: PostgresProfile.backend.DatabaseDef): Future[Seq[InventorySingleRecord]] = {
    db.run(this.result)
  }
}
</code></pre>
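<p>The <code>&lt;&gt;</code> projection in <code>*</code> maps between the row tuple and the case class via <code>InventorySingleRecord.tupled</code> and <code>InventorySingleRecord.unapply</code>. Written out by hand (a Slick-free sketch, repeating the case class so it stands alone), the two directions look like this:</p>

```scala
case class InventorySingleRecord(id: Option[Int], sku: String, qty: Int, location: String)

// tuple => case class: the role played by InventorySingleRecord.tupled.
def fromRow(row: (Option[Int], String, Int, String)): InventorySingleRecord =
  InventorySingleRecord(row._1, row._2, row._3, row._4)

// case class => Option[tuple]: the role played by unapply.
def toRow(r: InventorySingleRecord): Option[(Option[Int], String, Int, String)] =
  Some((r.id, r.sku, r.qty, r.location))

val record = fromRow((Some(1), "SKU-01", 2, "LOC-01"))
// toRow(record) round-trips back to the original tuple.
```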
<!--kg-card-end: markdown--><p>This gives us a basic DAO that initially supports just a single query to return every record. Then if we want to retrieve the data, we can await the Future for all the records in the database: <code>val futureResult = Await.result(singleDAO.findAll(database), Duration.Inf)</code></p><h3 id="testing-the-new-dao">Testing the New DAO</h3><p>One of the benefits of organizing our code this way is the ability to test our database queries in isolation (something I found helpful while experimenting with Slick).</p><p><strong>WARNING</strong> - the repeated create and drop is intended to be simple (to understand for the reader) at the cost of being relatively slow and inefficient.</p><p>Before writing our tests, we could use a trait to set up a database just for our testing (and avoid clobbering the database used to run the service).</p><!--kg-card-begin: markdown--><pre><code class="language-scala">import org.scalatest.{BeforeAndAfterAll, Suite}

import slick.dbio.DBIO
import slick.jdbc.PostgresProfile.api._
import scala.concurrent.Await
import scala.concurrent.duration.Duration

trait PostgresSpec extends Suite with BeforeAndAfterAll {

  private val dbName = getClass.getSimpleName.toLowerCase
  private val driver = &quot;org.postgresql.Driver&quot;

  private val postgres = Database.forURL(&quot;jdbc:postgresql://localhost:5432/?user=&lt;TODO-YOUR-USER&gt;&amp;password=&lt;TODO-YOUR-PASSWORD&gt;&quot;, driver = driver)

  def dropDB: DBIO[Int] = sqlu&quot;DROP DATABASE IF EXISTS #$dbName&quot;
  def createDB: DBIO[Int] = sqlu&quot;CREATE DATABASE #$dbName&quot;

  override def beforeAll(): Unit = {
    super.beforeAll()
    Await.result(postgres.run(dropDB), Duration.Inf)
    Await.result(postgres.run(createDB), Duration.Inf)
  }

  override def afterAll() {
    super.afterAll()
    Await.result(postgres.run(dropDB), Duration.Inf)
  }

  val database = Database.forURL(s&quot;jdbc:postgresql://localhost:5432/$dbName?user=&lt;TODO-YOUR-USER&gt;&amp;password=&lt;TODO-YOUR-PASSWORD&gt;&quot;, driver = driver)
}
</code></pre>
<!--kg-card-end: markdown--><p>This has a connection just for creating a special test database and tearing it down, along with a reference <code>database</code> for you to use in your test suite class.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">import org.bitbucket.honstain.PostgresSpec
import org.scalatest.BeforeAndAfter
import org.scalatra.test.scalatest._
import slick.dbio.DBIO
import slick.jdbc.PostgresProfile.api._

import scala.concurrent.Await
import scala.concurrent.duration.Duration


class InventorySingleRecordDaoTests extends ScalatraFunSuite with BeforeAndAfter with PostgresSpec {

  def createInventoryTable: DBIO[Int] =
    sqlu&quot;&quot;&quot;
          CREATE TABLE inventory_single
          (
            id bigserial NOT NULL,
            sku text,
            qty integer,
            location text,
            CONSTRAINT pk_single PRIMARY KEY (id),
            UNIQUE (sku, location)
          );
      &quot;&quot;&quot;
  def dropInventoryTable: DBIO[Int] =
    sqlu&quot;&quot;&quot;
          DROP TABLE IF EXISTS inventory_single;
      &quot;&quot;&quot;

  before {
    Await.result(database.run(createInventoryTable), Duration.Inf)
  }

  after {
    Await.result(database.run(dropInventoryTable), Duration.Inf)
  }

  val TEST_SKU = &quot;NewSku&quot;
  val BIN_01 = &quot;Bin-01&quot;
  val BIN_02 = &quot;Bin-02&quot;

  test(&quot;findAll&quot;) {
    val futureFind = InventorySingleRecordDao.findAll(database)
    val findResult: Seq[InventorySingleRecord] = Await.result(futureFind, Duration.Inf)

    findResult should equal(List())
  }
}
</code></pre>
<!--kg-card-end: markdown--><h3 id="create-new-inventory">Create New Inventory</h3><p>Now that we have a very basic schema for the DB, we want to be able to create new records and inventory. Many of the Slick 3.0 examples I have seen only address the basic insert. We would like to go a bit further and support create and update logic, giving callers of this DAO the ability to create/destroy/modify inventory levels for a given location and SKU. But to start with, let&apos;s do the most basic thing and build up.</p><p>Our first test could look something like this (note that the Slick documentation can be a good additional reference: <a href="http://slick.lightbend.com/doc/3.3.0/queries.html?ref=honstain.com#inserting">http://slick.lightbend.com/doc/3.3.0/queries.html#inserting</a>).</p><!--kg-card-begin: markdown--><pre><code class="language-scala">    test(&quot;create single record and use DAO to validate&quot;) {
      val future = InventorySingleRecordDao.create(database, TEST_SKU, 1, BIN_01)
      val result: Int = Await.result(future, Duration.Inf)
      // The expected result is just a count of the number of rows impacted.
      result should equal(1)

      // Validate that changes were persisted, in this case we will use a DAO
      // function we previously created to help us validate our new one.
      val futureFind = InventorySingleRecordDao.findAll(database)
      val findResult: Seq[InventorySingleRecord] = Await.result(futureFind, Duration.Inf)
      findResult should contain only InventorySingleRecord(Some(1), TEST_SKU, 1, BIN_01)
    }
</code></pre>
<!--kg-card-end: markdown--><p>If you would prefer not to compose elements of the DAO in tests (to use with setup and teardown), you could do the following, using new Slick queries for the test validation (inspecting the changes to the database).</p><!--kg-card-begin: markdown--><pre><code class="language-scala">    test(&quot;create single record and check slick query&quot;) {
      val future = InventorySingleRecordDao.create(database, TEST_SKU, 1, BIN_01)
      val result: Int = Await.result(future, Duration.Inf)
      // The expected result is just a count of the number of rows impacted.
      result should equal(1)

      // Validate that changes were persisted
      val inventoryTable = TableQuery[InventorySingleRecords]
      val futureFind = database.run(inventoryTable.result)
      val findResult: Seq[InventorySingleRecord] = Await.result(futureFind, Duration.Inf)
      findResult should contain only InventorySingleRecord(Some(1), TEST_SKU, 1, BIN_01)
    }
</code></pre>
<!--kg-card-end: markdown--><p>We can then add the following create method to our DAO:</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               location: String
              ): Future[Int] = {
    val query = TableQuery[InventorySingleRecords] += InventorySingleRecord(Option.empty, sku, qty, location)
    db.run(query)
  }
</code></pre>
<!--kg-card-end: markdown--><p>This is a good starting point, but you will notice that it is fairly restrictive: we would only be able to create records if none existed (because of the unique constraint we previously placed on the SKU and location columns). I would suggest experimenting with a test to prove this to yourself.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  test(&quot;create when unique constraint violated&quot;) {
    val resultFirst: Int = Await.result(
      InventorySingleRecordDao.create(database, TEST_SKU, 1, BIN_01),
      Duration.Inf
    )
    resultFirst should equal(1)

    // This naive test, will result in a PSQLException
    Await.result(
      InventorySingleRecordDao.create(database, TEST_SKU, 1, BIN_01),
      Duration.Inf
    )
  }
</code></pre>
<!--kg-card-end: markdown--><p>Unfortunately, the stack trace I got was not very helpful in understanding where the problem occurred in our code base, but we can improve on error handling later.</p><!--kg-card-begin: markdown--><pre><code class="language-bash">ERROR: duplicate key value violates unique constraint &quot;inventory_single_sku_location_key&quot;
  Detail: Key (sku, location)=(NewSku, Bin-01) already exists.
org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint &quot;inventory_single_sku_location_key&quot;
  Detail: Key (sku, location)=(NewSku, Bin-01) already exists.
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440)
	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183)
	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
	at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:143)
	at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:120)
	at slick.jdbc.JdbcActionComponent$InsertActionComposerImpl$SingleInsertAction.$anonfun$run$15(JdbcActionComponent.scala:522)
	at slick.jdbc.JdbcBackend$SessionDef.withPreparedStatement(JdbcBackend.scala:425)
	at slick.jdbc.JdbcBackend$SessionDef.withPreparedStatement$(JdbcBackend.scala:420)
	at slick.jdbc.JdbcBackend$BaseSession.withPreparedStatement(JdbcBackend.scala:489)
	at slick.jdbc.JdbcActionComponent$InsertActionComposerImpl.preparedInsert(JdbcActionComponent.scala:513)
	at slick.jdbc.JdbcActionComponent$InsertActionComposerImpl$SingleInsertAction.run(JdbcActionComponent.scala:519)
	at slick.jdbc.JdbcActionComponent$SimpleJdbcProfileAction.run(JdbcActionComponent.scala:30)
	at slick.jdbc.JdbcActionComponent$SimpleJdbcProfileAction.run(JdbcActionComponent.scala:27)
	at slick.basic.BasicBackend$DatabaseDef$$anon$3.liftedTree1$1(BasicBackend.scala:275)
	at slick.basic.BasicBackend$DatabaseDef$$anon$3.run(BasicBackend.scala:275)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
</code></pre>
<!--kg-card-end: markdown--><h3 id="create-with-update">Create with Update</h3><p>Now we want to take our current create logic one step further, allowing the caller to also update the quantity of an existing record (an upsert). Let&apos;s start by modifying our test to expect this new behavior.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  test(&quot;create with update&quot;) {
    val future = InventorySingleRecordDao.create(database, TEST_SKU, 1, BIN_01)
    Await.result(future, Duration.Inf)

    val futureUpdate = InventorySingleRecordDao.create(database, TEST_SKU, 3, BIN_01)
    val resultUpdate: Int = Await.result(futureUpdate, Duration.Inf)
    resultUpdate should equal(1)

    // Validate that changes were persisted
    val inventoryTable = TableQuery[InventorySingleRecords]
    val futureFind = database.run(inventoryTable.result)
    val findResult: Seq[InventorySingleRecord] = Await.result(futureFind, Duration.Inf)
    findResult should contain only InventorySingleRecord(Some(1), TEST_SKU, 3, BIN_01)
  }
</code></pre>
<!--kg-card-end: markdown--><p>With the test in place, we can implement our changes. Before we implement the Slick query it is worth reviewing what our specific database supports. In our case, PostgreSQL <a href="https://www.postgresql.org/docs/9.5/sql-insert.html?ref=honstain.com">https://www.postgresql.org/docs/9.5/sql-insert.html</a> supports upsert behavior with the &quot;ON CONFLICT&quot; clause. We could use a SQL query like this to get an atomic upsert.</p><!--kg-card-begin: markdown--><pre><code class="language-sql">INSERT INTO inventory_single (sku, qty, location)
VALUES (&apos;SKU-01&apos;, 3, &apos;LOC-01&apos;)
ON CONFLICT ON CONSTRAINT inventory_single_sku_location_key
DO UPDATE SET qty = EXCLUDED.qty;
</code></pre>
<!--kg-card-end: markdown--><p>The corresponding raw SQL implemented in Slick (this is a useful reference <a href="http://slick.lightbend.com/doc/3.3.0/sql.html?ref=honstain.com">http://slick.lightbend.com/doc/3.3.0/sql.html</a>) would be:</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               location: String
              ): Future[Int] = {
    //val query = TableQuery[InventorySingleRecords] += InventorySingleRecord(Option.empty, sku, qty, location)
    val query: DBIO[Int] =
      sqlu&quot;&quot;&quot;
           INSERT INTO inventory_single (sku, qty, location)
           VALUES ($sku, $qty, $location)
           ON CONFLICT ON CONSTRAINT inventory_single_sku_location_key
              DO UPDATE SET qty = EXCLUDED.qty;
        &quot;&quot;&quot;
    db.run(query)
  }
</code></pre>
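<p>Before relying on this in the service, it can help to see the upsert behavior in isolation. The following is a plain-Scala, in-memory analogy (not Slick or SQL; the <code>UpsertSketch</code> name is ours, purely for illustration) of what <code>ON CONFLICT ... DO UPDATE</code> buys us: a second write for the same (sku, location) key updates qty instead of failing.</p>

```scala
// In-memory analogy (plain Scala, no database) of the ON CONFLICT upsert:
// the (sku, location) pair acts as the unique key, and a second insert for
// the same key overwrites qty instead of throwing a constraint violation.
object UpsertSketch {
  type Key = (String, String) // (sku, location) - the unique constraint

  // Insert-or-update into an in-memory "table" of key -> qty.
  def upsert(table: Map[Key, Int], sku: String, qty: Int, location: String): Map[Key, Int] =
    table + ((sku, location) -> qty) // insert, or "DO UPDATE SET qty" on conflict

  val afterFirst: Map[Key, Int]  = upsert(Map.empty, "SKU-01", 1, "Bin-01")
  val afterSecond: Map[Key, Int] = upsert(afterFirst, "SKU-01", 3, "Bin-01") // conflict: qty overwritten

  def main(args: Array[String]): Unit = {
    assert(afterSecond == Map(("SKU-01", "Bin-01") -> 3))
    println("upsert analogy ok")
  }
}
```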
<!--kg-card-end: markdown--><p>That now gives us the ability to create/update quantities for a SKU and location while working within the existing constraints (we said the (sku, location) pair needed to be unique). You may not prefer the raw SQL, but we will explore alternatives in the next section.</p><h3 id="create-update-and-return-the-new-updated-record">Create/Update and Return the New/Updated Record</h3><p>The create DAO only returns an Int indicating whether a row was modified; we would now like to return the new value of the record after the create/update. We can modify our SQL by adding the <code>RETURNING</code> clause to the query.</p><!--kg-card-begin: markdown--><pre><code class="language-sql">INSERT INTO inventory_single (sku, qty, location)
VALUES (&apos;SKU-01&apos;, 3, &apos;LOC-01&apos;)
ON CONFLICT ON CONSTRAINT inventory_single_sku_location_key
DO UPDATE SET qty = EXCLUDED.qty
RETURNING sku, qty, location;
</code></pre>
<!--kg-card-end: markdown--><p>This means we then need to update the types in our DAO and its tests.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  test(&quot;create&quot;) {
    val future = InventorySingleRecordDao.create(database, TEST_SKU, 1, BIN_01)
    val result: Seq[(String, Int, String)] = Await.result(future, Duration.Inf)
    // The result now contains the returned (sku, qty, location) rows.
    result should contain only((TEST_SKU, 1, BIN_01))

    // Validate that changes were persisted
    val inventoryTable = TableQuery[InventorySingleRecords]
    val futureFind = database.run(inventoryTable.result)
    val findResult: Seq[InventorySingleRecord] = Await.result(futureFind, Duration.Inf)
    findResult should contain only InventorySingleRecord(Some(1), TEST_SKU, 1, BIN_01)
  }

  test(&quot;create with update&quot;) {
    val future = InventorySingleRecordDao.create(database, TEST_SKU, 1, BIN_01)
    Await.result(future, Duration.Inf)

    val futureUpdate = InventorySingleRecordDao.create(database, TEST_SKU, 3, BIN_01)
    val resultUpdate: Seq[(String, Int, String)] = Await.result(futureUpdate, Duration.Inf)
    resultUpdate should contain only((TEST_SKU, 3, BIN_01))

    // Validate that changes were persisted
    val inventoryTable = TableQuery[InventorySingleRecords]
    val futureFind = database.run(inventoryTable.result)
    val findResult: Seq[InventorySingleRecord] = Await.result(futureFind, Duration.Inf)
    findResult should contain only InventorySingleRecord(Some(1), TEST_SKU, 3, BIN_01)
  }
</code></pre>
<!--kg-card-end: markdown--><p>The create query in the DAO then becomes:</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               location: String
              ): Future[Seq[(String, Int, String)]] = {
    val query: DBIO[Seq[(String, Int, String)]] =
      sql&quot;&quot;&quot;
           INSERT INTO inventory_single (sku, qty, location)
           VALUES ($sku, $qty, $location)
           ON CONFLICT ON CONSTRAINT inventory_single_sku_location_key
              DO UPDATE SET qty = EXCLUDED.qty
           RETURNING sku, qty, location;
        &quot;&quot;&quot;.as[(String, Int, String)]
    db.run(query)
  }
</code></pre>
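<p>If you would rather hand callers case class instances than raw tuples, one option is a small mapping step after the query. The following is a sketch of a hypothetical helper (<code>toRecords</code> is not part of the DAO in this post); it only illustrates the shape of the conversion in plain Scala.</p>

```scala
// Hypothetical helper sketch (not part of this post's DAO): mapping the raw
// (sku, qty, location) tuples produced by the RETURNING query back into the
// InventorySingleRecord case class. Plain Scala, no database required.
case class InventorySingleRecord(
  id: Option[Int],
  sku: String,
  qty: Int,
  location: String
)

object ReturningSketch {
  def toRecords(rows: Seq[(String, Int, String)]): Seq[InventorySingleRecord] =
    rows.map { case (sku, qty, location) =>
      // id is not in the RETURNING list, so it stays None here.
      InventorySingleRecord(None, sku, qty, location)
    }

  def main(args: Array[String]): Unit = {
    val returned = Seq(("SKU-01", 3, "LOC-01"))
    assert(toRecords(returned) == Seq(InventorySingleRecord(None, "SKU-01", 3, "LOC-01")))
    println("returning mapping ok")
  }
}
```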
<!--kg-card-end: markdown--><p>The astute reader will notice that we no longer have a mapping to the <code>InventorySingleRecord</code> case class that we set up. We started by just passing back a tuple of (String, Int, String) to represent the record.</p><h3 id="summary">Summary</h3><p>We made it as far as creating the schema, reading from the DB and doing some progressively complicated upsert behavior. In the next posts, we will expand on this functionality and attempt to transfer inventory.</p>]]></content:encoded></item><item><title><![CDATA[Scalatra Inventory Management Service]]></title><description><![CDATA[<h2 id="overview">Overview </h2><p>In our previous set of posts, we added progressively more functionality to our basic Scalatra service. </p><ul><li><a href="https://honstain.com/scalatra-giter8/">Creating a Scalatra service</a> </li><li><a href="https://honstain.com/rest-in-a-scalatra-service/">Scalatra and REST</a></li><li><a href="https://honstain.com/scalatra-2-6-4-postgresql/">Scalatra with Slick and PostgreSQL</a></li></ul><p>Now we would like to explore implementing a system for tracking inventory. What you will get from this post:</p><ul><li>Creating a</li></ul>]]></description><link>https://honstain.com/scalatra-inventory-management-service/</link><guid isPermaLink="false">65b526ba7a5d430e36b8ebff</guid><category><![CDATA[Scala]]></category><category><![CDATA[Scalatra]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[Slick]]></category><dc:creator><![CDATA[Anthony Honstain]]></dc:creator><pubDate>Sat, 16 Mar 2019 19:57:14 GMT</pubDate><media:content url="https://honstain.com/content/images/2019/03/single_record_banner.JPG" medium="image"/><content:encoded><![CDATA[<h2 id="overview">Overview </h2><img src="https://honstain.com/content/images/2019/03/single_record_banner.JPG" alt="Scalatra Inventory Management Service"><p>In our previous set of posts, we added progressively more functionality to our basic Scalatra service. 
</p><ul><li><a href="https://honstain.com/scalatra-giter8/">Creating a Scalatra service</a> </li><li><a href="https://honstain.com/rest-in-a-scalatra-service/">Scalatra and REST</a></li><li><a href="https://honstain.com/scalatra-2-6-4-postgresql/">Scalatra with Slick and PostgreSQL</a></li></ul><p>Now we would like to explore implementing a system for tracking inventory. What you will get from this post:</p><ul><li>Creating a DB model for a simplified service (tracking inventory)</li><li>Using Slick to query that data</li><li>Using Slick to insert data</li><li>Using Slick and PostgreSQL to upsert</li><li>Tests for the new logic.</li></ul><h2 id="creating-a-service-to-track-inventory-levels">Creating a Service to Track Inventory Levels</h2><h3 id="create-the-database-schema">Create the Database Schema</h3><p>Let&apos;s start with a very basic schema that can track a product/SKU (if you are unfamiliar with the SKU terminology I recommend reading <a href="https://en.wikipedia.org/wiki/Stock_keeping_unit?ref=honstain.com">https://en.wikipedia.org/wiki/Stock_keeping_unit</a>), a location, and a quantity. A location could have multiple SKU&apos;s in it, and a SKU can be in more than one location.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/03/image.png" class="kg-image" alt="Scalatra Inventory Management Service" loading="lazy"></figure><!--kg-card-begin: markdown--><pre><code class="language-SQL">CREATE TABLE inventory_single
(
  id bigserial NOT NULL,
  sku text,
  qty integer,
  location text,
  CONSTRAINT pk_single PRIMARY KEY (id),
  UNIQUE (sku, location)
);
</code></pre>
<!--kg-card-end: markdown--><p>We can create a few example pieces of inventory:</p><!--kg-card-begin: markdown--><pre><code class="language-SQL">INSERT INTO inventory_single(sku, qty, location) VALUES
(&apos;SKU-01&apos;, 2, &apos;LOC-01&apos;),
(&apos;SKU-01&apos;, 0, &apos;LOC-02&apos;)
;
</code></pre>
<!--kg-card-end: markdown--><h3 id="create-a-dao-for-slick">Create a DAO for Slick</h3><p>Now that we have our database schema, let&apos;s set up our Scala code to access it. I have chosen here to abstract my database access and Slick queries with a DAO object. I referred to <a href="https://sap1ens.com/blog/2015/07/26/scala-slick-3-how-to-start/?ref=honstain.com">https://sap1ens.com/blog/2015/07/26/scala-slick-3-how-to-start/</a> when trying to organize my DAO, and I also found <a href="https://reactore.com/repository-patterngeneric-dao-implementation-in-scala-using-slick-3/?ref=honstain.com">https://reactore.com/repository-patterngeneric-dao-implementation-in-scala-using-slick-3/</a> helpful.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">import org.slf4j.{Logger, LoggerFactory}
import slick.jdbc.{PostgresProfile, TransactionIsolation}
import slick.jdbc.PostgresProfile.api._

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

case class InventorySingleRecord(
                                  id: Option[Int],
                                  sku: String,
                                  qty: Int,
                                  location: String
                                )

class InventorySingleRecords(tag: Tag) extends Table[InventorySingleRecord](tag, &quot;inventory_single&quot;) {
  def id = column[Int](&quot;id&quot;, O.PrimaryKey, O.AutoInc)
  def sku = column[String](&quot;sku&quot;)
  def qty = column[Int](&quot;qty&quot;)
  def location = column[String](&quot;location&quot;)
  def * =
    (id.?, sku, qty, location) &lt;&gt; (InventorySingleRecord.tupled, InventorySingleRecord.unapply)
}

object InventorySingleRecordDao extends TableQuery(new InventorySingleRecords(_)) {

  val logger: Logger = LoggerFactory.getLogger(getClass)

  def findAll(db: PostgresProfile.backend.DatabaseDef): Future[Seq[InventorySingleRecord]] = {
    db.run(this.result)
  }
}
</code></pre>
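<p>To make the <code>&lt;&gt;</code> projection above less mysterious, here is a standalone sketch of the tuple-to-case-class round trip it performs. This is plain Scala with no Slick or database involved; the <code>ProjectionSketch</code> object is a scratch name for illustration only.</p>

```scala
// Standalone sketch (plain Scala, no Slick, no database) of the tuple <->
// case class conversion that the `*` projection performs. The case class
// mirrors the DAO above; `ProjectionSketch` is a scratch name.
case class InventorySingleRecord(
  id: Option[Int],
  sku: String,
  qty: Int,
  location: String
)

object ProjectionSketch {
  val row: (Option[Int], String, Int, String) = (Some(1), "SKU-01", 2, "LOC-01")

  // Reading: a row tuple becomes a case class instance (what `tupled` does).
  val record: InventorySingleRecord = (InventorySingleRecord.apply _).tupled(row)

  // Writing: the case class is destructured back into its columns (what `unapply` does).
  val InventorySingleRecord(id, sku, qty, location) = record

  def main(args: Array[String]): Unit = {
    assert(record == InventorySingleRecord(Some(1), "SKU-01", 2, "LOC-01"))
    assert((id, sku, qty, location) == row)
    println("projection round-trip ok")
  }
}
```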
<!--kg-card-end: markdown--><p>This gives us a basic DAO that initially supports just a single query to return every record. If we want to retrieve the data, we can get a Future for all the records in the database: <code>val futureResult = Await.result(singleDAO.findAll(database), Duration.Inf)</code></p><h3 id="testing-the-new-dao">Testing the New DAO</h3><p>One of the benefits of organizing our code this way is the ability to test our database queries in isolation (something I found helpful while experimenting with Slick).</p><p><strong>WARNING </strong>- the repeated create and drop is intended to be simple for the reader to understand, at the cost of being relatively slow and inefficient.</p><p>Before writing our tests, we could use a trait to set up a database just for our testing (and avoid clobbering the database used to run the service).</p><!--kg-card-begin: markdown--><pre><code class="language-scala">import org.scalatest.{BeforeAndAfterAll, Suite}

import slick.dbio.DBIO
import slick.jdbc.PostgresProfile.api._
import scala.concurrent.Await
import scala.concurrent.duration.Duration

trait PostgresSpec extends Suite with BeforeAndAfterAll {

  private val dbName = getClass.getSimpleName.toLowerCase
  private val driver = &quot;org.postgresql.Driver&quot;

  private val postgres = Database.forURL(&quot;jdbc:postgresql://localhost:5432/?user=&lt;TODO-YOUR-USER&gt;&amp;password=&lt;TODO-YOUR-PASSWORD&gt;&quot;, driver = driver)

  def dropDB: DBIO[Int] = sqlu&quot;DROP DATABASE IF EXISTS #$dbName&quot;
  def createDB: DBIO[Int] = sqlu&quot;CREATE DATABASE #$dbName&quot;

  override def beforeAll(): Unit = {
    super.beforeAll()
    Await.result(postgres.run(dropDB), Duration.Inf)
    Await.result(postgres.run(createDB), Duration.Inf)
  }

  override def afterAll() {
    super.afterAll()
    Await.result(postgres.run(dropDB), Duration.Inf)
  }

  val database = Database.forURL(s&quot;jdbc:postgresql://localhost:5432/$dbName?user=&lt;TODO-YOUR-USER&gt;&amp;password=&lt;TODO-YOUR-PASSWORD&gt;&quot;, driver = driver)
}
</code></pre>
<!--kg-card-end: markdown--><p>This has a connection just for creating a special test database and tearing it down, along with a reference <code>database</code> for you to use in your test suite class.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">import org.bitbucket.honstain.PostgresSpec
import org.scalatest.BeforeAndAfter
import org.scalatra.test.scalatest._
import slick.dbio.DBIO
import slick.jdbc.PostgresProfile.api._

import scala.concurrent.Await
import scala.concurrent.duration.Duration


class InventorySingleRecordDaoTests extends ScalatraFunSuite with BeforeAndAfter with PostgresSpec {

  def createInventoryTable: DBIO[Int] =
    sqlu&quot;&quot;&quot;
          CREATE TABLE inventory_single
          (
            id bigserial NOT NULL,
            sku text,
            qty integer,
            location text,
            CONSTRAINT pk_single PRIMARY KEY (id),
            UNIQUE (sku, location)
          );
      &quot;&quot;&quot;
  def dropInventoryTable: DBIO[Int] =
    sqlu&quot;&quot;&quot;
          DROP TABLE IF EXISTS inventory_single;
      &quot;&quot;&quot;

  before {
    Await.result(database.run(createInventoryTable), Duration.Inf)
  }

  after {
    Await.result(database.run(dropInventoryTable), Duration.Inf)
  }

  val TEST_SKU = &quot;NewSku&quot;
  val BIN_01 = &quot;Bin-01&quot;
  val BIN_02 = &quot;Bin-02&quot;

  test(&quot;findAll&quot;) {
    val futureFind = InventorySingleRecordDao.findAll(database)
    val findResult: Seq[InventorySingleRecord] = Await.result(futureFind, Duration.Inf)

    findResult should equal(List())
  }
}
</code></pre>
<!--kg-card-end: markdown--><h3 id="create-new-inventory">Create New Inventory</h3><p>Now that we have a very basic schema for the DB, we want to be able to create new inventory records. Many of the Slick 3.0 examples I have seen only address the basic insert. We would like to go a bit further and support create and update logic, giving callers of this DAO the ability to create/destroy/modify inventory levels for a given location and SKU. But to start, let&apos;s do the most basic thing and build up.</p><p>Our first test could look something like this (note that the Slick documentation can be a good additional reference <a href="http://slick.lightbend.com/doc/3.3.0/queries.html?ref=honstain.com#inserting">http://slick.lightbend.com/doc/3.3.0/queries.html#inserting</a>):</p><!--kg-card-begin: markdown--><pre><code class="language-scala">    test(&quot;create single record and use DAO to validate&quot;) {
      val future = InventorySingleRecordDao.create(database, TEST_SKU, 1, BIN_01)
      val result: Int = Await.result(future, Duration.Inf)
      // The expected result is just a count of the number of rows impacted.
      result should equal(1)

      // Validate that changes were persisted, in this case we will use a DAO
      // function we previously created to help us validate our new one.
      val futureFind = InventorySingleRecordDao.findAll(database)
      val findResult: Seq[InventorySingleRecord] = Await.result(futureFind, Duration.Inf)
      findResult should contain only InventorySingleRecord(Some(1), TEST_SKU, 1, BIN_01)
    }
</code></pre>
<!--kg-card-end: markdown--><p>If you would prefer not to reuse DAO methods inside your tests (above we used <code>findAll</code> for validation), you can instead write standalone Slick queries that inspect the changes to the database, as in the following test.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">    test(&quot;create single record and check slick query&quot;) {
      val future = InventorySingleRecordDao.create(database, TEST_SKU, 1, BIN_01)
      val result: Int = Await.result(future, Duration.Inf)
      // The expected result is just a count of the number of rows impacted.
      result should equal(1)

      // Validate that changes were persisted
      val inventoryTable = TableQuery[InventorySingleRecords]
      val futureFind = database.run(inventoryTable.result)
      val findResult: Seq[InventorySingleRecord] = Await.result(futureFind, Duration.Inf)
      findResult should contain only InventorySingleRecord(Some(1), TEST_SKU, 1, BIN_01)
    }
</code></pre>
<!--kg-card-end: markdown--><p>We can then add the following create method to our DAO:</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               location: String
              ): Future[Int] = {
    val query = TableQuery[InventorySingleRecords] += InventorySingleRecord(Option.empty, sku, qty, location)
    db.run(query)
  }
</code></pre>
<!--kg-card-end: markdown--><p>This is a good starting point, but you will notice that it is fairly restrictive: we would only be able to create records if none existed (because of the unique constraint we previously placed on the sku and location columns). I would suggest experimenting with a test to prove this to yourself.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  test(&quot;create when unique constraint violated&quot;) {
    val resultFirst: Int = Await.result(
      InventorySingleRecordDao.create(database, TEST_SKU, 1, BIN_01),
      Duration.Inf
    )
    resultFirst should equal(1)

    // This naive second insert will result in a PSQLException
    Await.result(
      InventorySingleRecordDao.create(database, TEST_SKU, 1, BIN_01),
      Duration.Inf
    )
  }
</code></pre>
<!--kg-card-end: markdown--><p>Unfortunately, the stack trace I got was not very helpful in understanding where the problem occurred in our code base, but we can improve the error handling later.</p><!--kg-card-begin: markdown--><pre><code class="language-bash">ERROR: duplicate key value violates unique constraint &quot;inventory_single_sku_location_key&quot;
  Detail: Key (sku, location)=(NewSku, Bin-01) already exists.
org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint &quot;inventory_single_sku_location_key&quot;
  Detail: Key (sku, location)=(NewSku, Bin-01) already exists.
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440)
	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183)
	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
	at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:143)
	at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:120)
	at slick.jdbc.JdbcActionComponent$InsertActionComposerImpl$SingleInsertAction.$anonfun$run$15(JdbcActionComponent.scala:522)
	at slick.jdbc.JdbcBackend$SessionDef.withPreparedStatement(JdbcBackend.scala:425)
	at slick.jdbc.JdbcBackend$SessionDef.withPreparedStatement$(JdbcBackend.scala:420)
	at slick.jdbc.JdbcBackend$BaseSession.withPreparedStatement(JdbcBackend.scala:489)
	at slick.jdbc.JdbcActionComponent$InsertActionComposerImpl.preparedInsert(JdbcActionComponent.scala:513)
	at slick.jdbc.JdbcActionComponent$InsertActionComposerImpl$SingleInsertAction.run(JdbcActionComponent.scala:519)
	at slick.jdbc.JdbcActionComponent$SimpleJdbcProfileAction.run(JdbcActionComponent.scala:30)
	at slick.jdbc.JdbcActionComponent$SimpleJdbcProfileAction.run(JdbcActionComponent.scala:27)
	at slick.basic.BasicBackend$DatabaseDef$$anon$3.liftedTree1$1(BasicBackend.scala:275)
	at slick.basic.BasicBackend$DatabaseDef$$anon$3.run(BasicBackend.scala:275)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
</code></pre>
<!--kg-card-end: markdown--><h3 id="create-with-update">Create with Update</h3><p>Now we want to take our current create logic one step further, allowing the caller to also update the quantity of an existing record (an upsert). Let&apos;s start by modifying our test to expect this new behavior.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  test(&quot;create with update&quot;) {
    val future = InventorySingleRecordDao.create(database, TEST_SKU, 1, BIN_01)
    Await.result(future, Duration.Inf)

    val futureUpdate = InventorySingleRecordDao.create(database, TEST_SKU, 3, BIN_01)
    val resultUpdate: Int = Await.result(futureUpdate, Duration.Inf)
    resultUpdate should equal(1)

    // Validate that changes were persisted
    val inventoryTable = TableQuery[InventorySingleRecords]
    val futureFind = database.run(inventoryTable.result)
    val findResult: Seq[InventorySingleRecord] = Await.result(futureFind, Duration.Inf)
    findResult should contain only InventorySingleRecord(Some(1), TEST_SKU, 3, BIN_01)
  }
</code></pre>
<!--kg-card-end: markdown--><p>With the test in place, we can implement our changes. Before we implement the Slick query it is worth reviewing what our specific database supports. In our case, PostgreSQL <a href="https://www.postgresql.org/docs/9.5/sql-insert.html?ref=honstain.com">https://www.postgresql.org/docs/9.5/sql-insert.html</a> supports upsert behavior with the &quot;ON CONFLICT&quot; clause. We could use a SQL query like this to get an atomic upsert.</p><!--kg-card-begin: markdown--><pre><code class="language-sql">INSERT INTO inventory_single (sku, qty, location)
VALUES (&apos;SKU-01&apos;, 3, &apos;LOC-01&apos;)
ON CONFLICT ON CONSTRAINT inventory_single_sku_location_key
DO UPDATE SET qty = EXCLUDED.qty;
</code></pre>
<!--kg-card-end: markdown--><p>The corresponding raw SQL implemented in Slick (this is a useful reference <a href="http://slick.lightbend.com/doc/3.3.0/sql.html?ref=honstain.com">http://slick.lightbend.com/doc/3.3.0/sql.html</a>) would be:</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               location: String
              ): Future[Int] = {
    //val query = TableQuery[InventorySingleRecords] += InventorySingleRecord(Option.empty, sku, qty, location)
    val query: DBIO[Int] =
      sqlu&quot;&quot;&quot;
           INSERT INTO inventory_single (sku, qty, location)
           VALUES ($sku, $qty, $location)
           ON CONFLICT ON CONSTRAINT inventory_single_sku_location_key
              DO UPDATE SET qty = EXCLUDED.qty;
        &quot;&quot;&quot;
    db.run(query)
  }
</code></pre>
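<p>Before relying on this in the service, it can help to see the upsert behavior in isolation. The following is a plain-Scala, in-memory analogy (not Slick or SQL; the <code>UpsertSketch</code> name is ours, purely for illustration) of what <code>ON CONFLICT ... DO UPDATE</code> buys us: a second write for the same (sku, location) key updates qty instead of failing.</p>

```scala
// In-memory analogy (plain Scala, no database) of the ON CONFLICT upsert:
// the (sku, location) pair acts as the unique key, and a second insert for
// the same key overwrites qty instead of throwing a constraint violation.
object UpsertSketch {
  type Key = (String, String) // (sku, location) - the unique constraint

  // Insert-or-update into an in-memory "table" of key -> qty.
  def upsert(table: Map[Key, Int], sku: String, qty: Int, location: String): Map[Key, Int] =
    table + ((sku, location) -> qty) // insert, or "DO UPDATE SET qty" on conflict

  val afterFirst: Map[Key, Int]  = upsert(Map.empty, "SKU-01", 1, "Bin-01")
  val afterSecond: Map[Key, Int] = upsert(afterFirst, "SKU-01", 3, "Bin-01") // conflict: qty overwritten

  def main(args: Array[String]): Unit = {
    assert(afterSecond == Map(("SKU-01", "Bin-01") -> 3))
    println("upsert analogy ok")
  }
}
```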
<!--kg-card-end: markdown--><p>That now gives us the ability to create/update quantities for a SKU and location while working within the existing constraints (we said the (sku, location) pair needed to be unique). You may not prefer the raw SQL, but we will explore alternatives in the next section.</p><h3 id="create-update-and-return-the-new-updated-record">Create/Update and Return the New/Updated Record</h3><p>The create DAO only returns an Int indicating whether a row was modified; we would now like to return the new value of the record after the create/update. We can modify our SQL by adding the <code>RETURNING</code> clause to the query.</p><!--kg-card-begin: markdown--><pre><code class="language-sql">INSERT INTO inventory_single (sku, qty, location)
VALUES (&apos;SKU-01&apos;, 3, &apos;LOC-01&apos;)
ON CONFLICT ON CONSTRAINT inventory_single_sku_location_key
DO UPDATE SET qty = EXCLUDED.qty
RETURNING sku, qty, location;
</code></pre>
<!--kg-card-end: markdown--><p>This means we then need to update the types in our DAO and its tests.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  test(&quot;create&quot;) {
    val future = InventorySingleRecordDao.create(database, TEST_SKU, 1, BIN_01)
    val result: Seq[(String, Int, String)] = Await.result(future, Duration.Inf)
    // The result now contains the returned (sku, qty, location) rows.
    result should contain only((TEST_SKU, 1, BIN_01))

    // Validate that changes were persisted
    val inventoryTable = TableQuery[InventorySingleRecords]
    val futureFind = database.run(inventoryTable.result)
    val findResult: Seq[InventorySingleRecord] = Await.result(futureFind, Duration.Inf)
    findResult should contain only InventorySingleRecord(Some(1), TEST_SKU, 1, BIN_01)
  }

  test(&quot;create with update&quot;) {
    val future = InventorySingleRecordDao.create(database, TEST_SKU, 1, BIN_01)
    Await.result(future, Duration.Inf)

    val futureUpdate = InventorySingleRecordDao.create(database, TEST_SKU, 3, BIN_01)
    val resultUpdate: Seq[(String, Int, String)] = Await.result(futureUpdate, Duration.Inf)
    resultUpdate should contain only((TEST_SKU, 3, BIN_01))

    // Validate that changes were persisted
    val inventoryTable = TableQuery[InventorySingleRecords]
    val futureFind = database.run(inventoryTable.result)
    val findResult: Seq[InventorySingleRecord] = Await.result(futureFind, Duration.Inf)
    findResult should contain only InventorySingleRecord(Some(1), TEST_SKU, 3, BIN_01)
  }
</code></pre>
<!--kg-card-end: markdown--><p>The create query in the DAO then becomes:</p><!--kg-card-begin: markdown--><pre><code class="language-scala">  def create(db: PostgresProfile.backend.DatabaseDef,
               sku: String,
               qty: Int,
               location: String
              ): Future[Seq[(String, Int, String)]] = {
    val query: DBIO[Seq[(String, Int, String)]] =
      sql&quot;&quot;&quot;
           INSERT INTO inventory_single (sku, qty, location)
           VALUES ($sku, $qty, $location)
           ON CONFLICT ON CONSTRAINT inventory_single_sku_location_key
              DO UPDATE SET qty = EXCLUDED.qty
           RETURNING sku, qty, location;
        &quot;&quot;&quot;.as[(String, Int, String)]
    db.run(query)
  }
</code></pre>
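<p>If you would rather hand callers case class instances than raw tuples, one option is a small mapping step after the query. The following is a sketch of a hypothetical helper (<code>toRecords</code> is not part of the DAO in this post); it only illustrates the shape of the conversion in plain Scala.</p>

```scala
// Hypothetical helper sketch (not part of this post's DAO): mapping the raw
// (sku, qty, location) tuples produced by the RETURNING query back into the
// InventorySingleRecord case class. Plain Scala, no database required.
case class InventorySingleRecord(
  id: Option[Int],
  sku: String,
  qty: Int,
  location: String
)

object ReturningSketch {
  def toRecords(rows: Seq[(String, Int, String)]): Seq[InventorySingleRecord] =
    rows.map { case (sku, qty, location) =>
      // id is not in the RETURNING list, so it stays None here.
      InventorySingleRecord(None, sku, qty, location)
    }

  def main(args: Array[String]): Unit = {
    val returned = Seq(("SKU-01", 3, "LOC-01"))
    assert(toRecords(returned) == Seq(InventorySingleRecord(None, "SKU-01", 3, "LOC-01")))
    println("returning mapping ok")
  }
}
```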
<!--kg-card-end: markdown--><p>The astute reader will notice that we no longer have a mapping to the <code>InventorySingleRecord</code> case class that we set up. We started by just passing back a tuple of (String, Int, String) to represent the record.</p><h3 id="summary">Summary</h3><p>We made it as far as creating the schema, reading from the DB and doing some progressively complicated upsert behavior. In the next posts, we will expand on this functionality and attempt to transfer inventory.</p>]]></content:encoded></item><item><title><![CDATA[Scalatra 2.6.4 with Slick and PostgreSQL]]></title><description><![CDATA[Adding support for PostgreSQL to a Scala Scalatra REST service.]]></description><link>https://honstain.com/scalatra-2-6-4-postgresql-2/</link><guid isPermaLink="false">65b52aaf7a5d430e36b8ec87</guid><category><![CDATA[Scala]]></category><category><![CDATA[Scalatra]]></category><category><![CDATA[PostgreSQL]]></category><dc:creator><![CDATA[Anthony Honstain]]></dc:creator><pubDate>Sat, 09 Feb 2019 21:43:37 GMT</pubDate><media:content url="https://honstain.com/content/images/2019/02/scalatra_database_intelliJ.JPG" medium="image"/><content:encoded><![CDATA[<h2 id="overview">Overview</h2><img src="https://honstain.com/content/images/2019/02/scalatra_database_intelliJ.JPG" alt="Scalatra 2.6.4 with Slick and PostgreSQL"><p>Building on the last several posts (creating a <a href="https://honstain.com/scalatra-giter8/">Scalatra service</a> and <a href="https://honstain.com/rest-in-a-scalatra-service/">supporting REST</a>), we would now like to add support for a database. 
I have chosen to use PostgreSQL for this guide.</p><p>Versions being used in this guide:</p><ul><li>Scalatra version 2.6.4 <a href="http://scalatra.org/?ref=honstain.com">http://scalatra.org/</a> </li><li>Scala version 2.12.6 <a href="https://www.scala-lang.org/?ref=honstain.com">https://www.scala-lang.org/</a></li><li>PostgreSQL 10.6 <code>PostgreSQL 10.6 (Ubuntu 10.6-0ubuntu0.18.10.1) on x86_64-pc-linux-gnu</code> installed locally on Ubuntu 18.10 using <code>sudo apt install postgresql</code> <a href="https://www.postgresql.org/?ref=honstain.com">https://www.postgresql.org/</a></li><li>Ubuntu 18.10 <a href="http://releases.ubuntu.com/18.10/?ref=honstain.com">http://releases.ubuntu.com/18.10/</a> (I tend to run this in VMware Workstation 15 Player for convenience)</li></ul><p>Assumptions:</p><ul><li>You already have a basic Scalatra service started.</li></ul><h2 id="details">Details</h2><p>You can start with the official Scalatra guide for integrating with a persistence framework and the manual for Slick:</p><ul><li><a href="http://scalatra.org//guides/2.6/persistence/introduction.html?ref=honstain.com">http://scalatra.org//guides/2.6/persistence/introduction.html</a></li><li><a href="http://scalatra.org/guides/2.6/persistence/slick.html?ref=honstain.com">http://scalatra.org/guides/2.6/persistence/slick.html</a></li></ul><p>I wanted to start with Slick based on the positive things I had heard about it from my peers more knowledgeable in Scala. However I wanted to start with PostgreSQL instead of H2 (as I was interested in being able to do some naive bench-marking and eventually run the service in Heroku).</p><h3 id="getting-your-database-running">Getting Your Database Running</h3><p>If you prefer to run Docker or have an alternative approach, feel free to skip this section.</p><p>This may not be the right path for you, this is optimized for local development and direct administration of the DB. 
It is <strong>NOT SECURE</strong>, and <strong>NOT MEANT FOR PRODUCTION</strong>.</p><ul><li>Install PostgreSQL - I have opted to use the Ubuntu package <code>sudo apt install postgresql</code>. I found this resource helpful (DigitalOcean produces some very helpful guides) <a href="https://www.digitalocean.com/community/tutorials/how-to-install-and-use-postgresql-on-ubuntu-18-04?ref=honstain.com">https://www.digitalocean.com/community/tutorials/how-to-install-and-use-postgresql-on-ubuntu-18-04</a></li><li>Create a user and a new DB</li></ul><!--kg-card-begin: markdown--><pre><code class="language-bash">sudo -i -u postgres
&gt; createuser --interactive
&gt; createdb toyinventory
</code></pre>
<!--kg-card-end: markdown--><ul><li>Allow access to the local database via trust authentication (I am using this because I have a single-user workstation for development - PostgreSQL assumes anyone who can connect is authorized as whatever user they claim). Some additional references if you&apos;re interested: <a href="https://www.postgresql.org/docs/9.1/auth-pg-hba-conf.html?ref=honstain.com">https://www.postgresql.org/docs/9.1/auth-pg-hba-conf.html</a> and <a href="https://www.postgresql.org/docs/9.1/auth-methods.html?ref=honstain.com#AUTH-TRUST">https://www.postgresql.org/docs/9.1/auth-methods.html#AUTH-TRUST</a></li><li>Use the editor of your choice to open your <code>pg_hba.conf</code> file, e.g. <code>sudo emacs /etc/postgresql/10/main/pg_hba.conf</code>, and set the IPv4 and IPv6 entries to trust.</li></ul><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/02/image-10.png" class="kg-image" alt="Scalatra 2.6.4 with Slick and PostgreSQL" loading="lazy"></figure><ul><li>UPDATE TO GUIDE - I also ended up setting &quot;local&quot; to <code>trust</code> so that I could easily get access with psql from the command line.</li><li>Restart PostgreSQL: <code>sudo service postgresql restart</code></li></ul><!--kg-card-begin: markdown--><p><s>You can connect and interact with the database via psql using the account you previously created. sudo -i -u toyinventory psql.</s></p>
<!--kg-card-end: markdown--><p>You can connect and interact with the database via psql using the account you previously created: <code>psql -U toyinventory</code>. This guide can be helpful if your psql is rusty: <a href="http://postgresguide.com/utilities/psql.html?ref=honstain.com">http://postgresguide.com/utilities/psql.html</a></p><p>You can then validate by accessing the database from IntelliJ, as I frequently prefer to execute queries and inspect the database there.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/02/image-11.png" class="kg-image" alt="Scalatra 2.6.4 with Slick and PostgreSQL" loading="lazy"></figure><h3 id="creating-table-and-records">Creating Table and Records</h3><p>You will want a simple table and some records to start working with; for this guide we will avoid going into schema management. First, create your table:</p><!--kg-card-begin: markdown--><pre><code class="language-sql">CREATE TABLE inventory
(
  id bigserial NOT NULL,
  sku text,
  qty integer, -- https://www.postgresql.org/docs/10/datatype-numeric.html
  description text,
  CONSTRAINT pk PRIMARY KEY (id)
);
</code></pre>
<!--kg-card-end: markdown--><p>A little off topic, but some interesting references if you are considering what column type to use for strings with PostgreSQL <a href="https://stackoverflow.com/questions/4848964/postgresql-difference-between-text-and-varchar-character-varying?ref=honstain.com">https://stackoverflow.com/questions/4848964/postgresql-difference-between-text-and-varchar-character-varying</a> and <a href="https://www.depesz.com/2010/03/02/charx-vs-varcharx-vs-varchar-vs-text/?ref=honstain.com">https://www.depesz.com/2010/03/02/charx-vs-varcharx-vs-varchar-vs-text/</a></p><p>Insert some data to work with:</p><!--kg-card-begin: markdown--><pre><code class="language-sql">INSERT INTO inventory(sku, qty, description) VALUES 
(&apos;ZL101&apos;, 1, &apos;Black shoes&apos;), 
(&apos;ZL102&apos;, 0, &apos;Red dress&apos;), 
(&apos;ZL103&apos;, 4, &apos;Block of wood&apos;);
</code></pre>
<!--kg-card-end: markdown--><h3 id="adding-the-necessary-scalatra-dependencies">Adding the Necessary Scalatra Dependencies</h3><p>Now that you have a running database, let&apos;s get Scalatra to talk to it.</p><p>If you started with the Slick documentation <a href="http://slick.lightbend.com/doc/3.3.0/gettingstarted.html?ref=honstain.com">http://slick.lightbend.com/doc/3.3.0/gettingstarted.html</a> you will have found that it also uses the H2 database. I found it difficult to follow (but that&apos;s probably my weakness more than anything).</p><p>I added the following dependencies to my project&apos;s <code>build.sbt</code>:</p><!--kg-card-begin: markdown--><pre><code class="language-scala">libraryDependencies ++= Seq(
  &quot;com.typesafe.slick&quot; %% &quot;slick&quot; % &quot;3.3.0&quot;,
  &quot;org.postgresql&quot; % &quot;postgresql&quot; % &quot;42.2.5&quot;, // org.postgresql.ds.PGSimpleDataSource dependency
)
</code></pre>
<!--kg-card-end: markdown--><ul><li>Slick version 3.3.0 <a href="http://slick.lightbend.com/doc/3.3.0/?ref=honstain.com">http://slick.lightbend.com/doc/3.3.0/</a></li><li>PostgreSQL JDBC <a href="https://github.com/pgjdbc/pgjdbc?ref=honstain.com">https://github.com/pgjdbc/pgjdbc</a> If you went through the Slick documentation you will see that <a href="http://slick.lightbend.com/doc/3.3.0/database.html?ref=honstain.com">http://slick.lightbend.com/doc/3.3.0/database.html</a> recommends version <code>9.4-1206-jdbc42</code>; I have opted for the more recent <code>42.2.5</code>.</li></ul><h3 id="connecting-to-the-database">Connecting to the Database</h3><p>I want to see the code talk to the DB, so I ignored proper management of connections and dependencies to get things started. </p><p>I added the following imports to the servlet responsible for the REST endpoints. NOTE - this was where I got tripped up trying to follow other guides: there are a number of ways to use this library, and if you let IntelliJ handle the auto-import it is very likely that you will end up with a confusing mess.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">import slick.jdbc.PostgresProfile.api._

import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
</code></pre>
<!--kg-card-end: markdown--><p>Now write the code needed to make the database connection. I have opted to retrieve the postgres user and password from environment variables rather than checking them directly into source code.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">val postgres_user = sys.env(&quot;postgres_user&quot;)
val postgres_password = sys.env(&quot;postgres_password&quot;)
val connectionUrl = s&quot;jdbc:postgresql://localhost:5432/toyinventory?user=${postgres_user}&amp;password=${postgres_password}&quot;
</code></pre>
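One caveat with this approach: `sys.env("...")` throws a bare `NoSuchElementException` when the variable is unset, with no hint about which variable was missing. A minimal sketch of a friendlier lookup (the `requiredEnv` helper name is my own, not part of the service):

```scala
// Sketch: fail fast with a descriptive error when a required environment
// variable is missing. sys.env(name) alone throws a bare
// NoSuchElementException that does not say which variable was absent.
def requiredEnv(name: String): String =
  sys.env.getOrElse(name, sys.error(s"Missing required environment variable: $name"))

// Usage (commented out so the sketch loads without these variables set):
// val postgresUser     = requiredEnv("postgres_user")
// val postgresPassword = requiredEnv("postgres_password")
```

Because `getOrElse` takes its default by name, `sys.error` only fires when the variable is actually absent.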
<!--kg-card-end: markdown--><p>Now define a class to model your database record.</p><!--kg-card-begin: markdown--><pre><code class="language-scala">class InventoryRecord(tag: Tag) extends
  Table[(Int, String, Int, String)](tag, &quot;inventory&quot;) {

  def id = column[Int](&quot;id&quot;)
  def sku = column[String](&quot;sku&quot;)
  def qty = column[Int](&quot;qty&quot;)
  def description = column[String](&quot;description&quot;)

  def * = (id, sku, qty, description)
}
</code></pre>
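If you would rather get a case class back instead of a raw tuple, Slick can build that projection for you with `mapTo`. A hedged sketch against the same `inventory` table (the `InventoryRow` and `InventoryRows` names are mine, purely illustrative; this requires the Slick dependency and is not runnable on its own):

```scala
import slick.jdbc.PostgresProfile.api._

// Sketch: project rows onto a case class instead of a raw
// (Int, String, Int, String) tuple.
case class InventoryRow(id: Int, sku: String, qty: Int, description: String)

class InventoryRows(tag: Tag) extends Table[InventoryRow](tag, "inventory") {
  def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
  def sku = column[String]("sku")
  def qty = column[Int]("qty")
  def description = column[String]("description")

  // mapTo derives the bidirectional mapping between the tuple and the case class.
  def * = (id, sku, qty, description).mapTo[InventoryRow]
}
```

With this in place, `db.run(TableQuery[InventoryRows].result)` would yield a `Future[Seq[InventoryRow]]` rather than a sequence of tuples.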
<!--kg-card-end: markdown--><p>Now you&apos;re ready to run a SELECT-all query using Slick:</p><!--kg-card-begin: markdown--><pre><code class="language-scala">val db = Database.forURL(connectionUrl, driver = &quot;org.postgresql.Driver&quot;)

try {
  val users = TableQuery[InventoryRecord]
  val query = users.map(_.sku)
  val action = query.result
  val result: Future[Seq[String]] = db.run(action)
  val futureResult = Await.result(result, Duration.Inf)
  futureResult.map { sku =&gt; logger.debug(s&quot;SKU: ${sku}&quot;) }
} finally db.close
</code></pre>
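One thing to watch in the snippet above: `Await.result(..., Duration.Inf)` will block the request thread forever if the database stalls. A sketch of the same blocking pattern with a bounded timeout (a plain `Future` stands in for `db.run(action)` here; the helper name and the 5-second default are my own arbitrary choices):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

// Sketch: bound the wait so a stalled query surfaces as a TimeoutException
// instead of hanging the request thread indefinitely. The Future argument
// stands in for db.run(action); 5.seconds is an arbitrary default.
def awaitSkus(pending: Future[Seq[String]],
              timeout: FiniteDuration = 5.seconds): Seq[String] =
  Await.result(pending, timeout)

val skus = awaitSkus(Future.successful(Seq("ZL101", "ZL102", "ZL103")))
```

A stuck query then fails loudly after the timeout rather than pinning a servlet thread, which matters once real traffic hits the endpoint.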
<!--kg-card-end: markdown--><p>Now that you have all of the pieces, I slammed that code into the GET endpoint I defined in <a href="https://honstain.com/rest-in-a-scalatra-service/">http://honstain.com/rest-in-a-scalatra-service/</a> and sent some HTTP requests.</p><p>Don&apos;t forget to set your environment variables if you opt to run via IntelliJ.</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/02/image-18.png" class="kg-image" alt="Scalatra 2.6.4 with Slick and PostgreSQL" loading="lazy"></figure><!--kg-card-begin: markdown--><pre><code class="language-scala">import org.scalatra._
import org.slf4j.LoggerFactory
// JSON-related libraries
import org.json4s.{DefaultFormats, Formats}
// JSON handling support from Scalatra
import org.scalatra.json._

import slick.jdbc.PostgresProfile.api._

import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global

class ToyInventory extends ScalatraServlet with JacksonJsonSupport {

  val logger = LoggerFactory.getLogger(getClass)
  protected implicit val jsonFormats: Formats = DefaultFormats

  before() {
    contentType = formats(&quot;json&quot;)
  }

  get(&quot;/&quot;) {
    val postgres_user = sys.env(&quot;postgres_user&quot;)
    val postgres_password = sys.env(&quot;postgres_password&quot;)
    val connectionUrl = s&quot;jdbc:postgresql://localhost:5432/toyinventory?user=${postgres_user}&amp;password=${postgres_password}&quot;

    val db = Database.forURL(connectionUrl, driver = &quot;org.postgresql.Driver&quot;)

    try {
      val users = TableQuery[InventoryRecord]
      val query = users.map(_.sku)
      val action = query.result
      val result: Future[Seq[String]] = db.run(action)
      val futureResult = Await.result(result, Duration.Inf)
      futureResult.map { sku =&gt; logger.debug(s&quot;SKU: ${sku}&quot;) }
    } finally db.close

    InventoryData.all
  }

  post(&quot;/&quot;) {
    val newInventory = parsedBody.extract[Inventory]
    logger.debug(s&quot;Creating inventory sku:${newInventory.sku}&quot;)
    logger.debug(&quot;Creating inventory {}&quot;, newInventory.toString)
    InventoryData.all = newInventory :: InventoryData.all
    newInventory
  }

}

case class Inventory(sku: String, qty: Int, description: String)

object InventoryData {

  var all = List(
    Inventory(&quot;ZL101&quot;, 1, &quot;Black shoes&quot;),
    Inventory(&quot;ZL102&quot;, 0, &quot;Red dress&quot;),
    Inventory(&quot;ZL103&quot;, 4, &quot;Block of wood&quot;),
  )
}

class InventoryRecord(tag: Tag) extends
  Table[(Int, String, Int, String)](tag, &quot;inventory&quot;) {

  def id = column[Int](&quot;id&quot;)
  def sku = column[String](&quot;sku&quot;)
  def qty = column[Int](&quot;qty&quot;)
  def description = column[String](&quot;description&quot;)

  def * = (id, sku, qty, description)
}
</code></pre>
<!--kg-card-end: markdown--><p>Running this produced the following logs. Slick generated a significant amount of logging that I thought was very detailed (but would probably trim down immediately).</p><figure class="kg-card kg-image-card"><img src="https://honstain.com/content/images/2019/02/image-14.png" class="kg-image" alt="Scalatra 2.6.4 with Slick and PostgreSQL" loading="lazy"></figure><h2 id="summary">Summary</h2><p>Now that you have a basic query working, you can start writing more advanced queries. You will also want to start managing your database initialization and connections more appropriately.</p><h3 id="references-i-found-helpful">References I Found Helpful</h3><ul><li>Slick Documentation <a href="http://scalatra.org/guides/2.6/persistence/slick.html?ref=honstain.com">http://scalatra.org/guides/2.6/persistence/slick.html</a></li><li>Slick Queries <a href="http://slick.lightbend.com/doc/3.3.0/queries.html?ref=honstain.com">http://slick.lightbend.com/doc/3.3.0/queries.html</a></li><li>PSQL cheat sheet <a href="http://postgresguide.com/utilities/psql.html?ref=honstain.com">http://postgresguide.com/utilities/psql.html</a></li><li>This guide helped me by demonstrating some basic postgres queries from Slick <a href="http://queirozf.com/entries/scala-slick-simple-example-on-connecting-to-a-postgresql-database?ref=honstain.com">http://queirozf.com/entries/scala-slick-simple-example-on-connecting-to-a-postgresql-database</a></li></ul>]]></content:encoded></item></channel></rss>