6 Common Misconceptions Around Akka-HTTP / Pekko-HTTP

Paweł Kiersznowski
Scala Developer

akka-http is the foundation of many Scala and Java web services that have been running successfully in production for quite some time now. It’s powered by Akka, a concurrency toolkit that played a big part in Scala’s hype taking off.

1. akka-http is dead due to license changes

Lightbend recently moved Akka away from an open-source license, which sparked a fair bit of controversy. It made many developers believe that Akka and the libraries that depend on it would fade away because no one would be willing to maintain a community fork. I had a lot of doubts about this scenario, looking at how widespread Akka still is in many companies – an OSS alternative would appear sooner or later.

Not too long after Lightbend’s decision, the Apache Software Foundation accepted Pekko into the Apache Incubator. It’s great news for the Akka ecosystem, and as of today there are no signs of maintenance slowing down – Pekko is being worked on regularly.

[Chart: commit frequency of akka-http vs pekko-http]

As long-time akka-http fans, Iterators contribute to pekko-http too. You can also check out our Scala 3 example of a pekko-http microservice!

2. The routing is unreadable

This is pretty much the most popular argument against akka-http. Most people provide this sort of code example to back this argument:

    pathPrefix(organizationsPath) {
      path(JavaUUID.as[OrganizationId] / roadmapsPath) { organizationId =>
        pathEndOrSingleSlash {
          put {
            entity(as[RoadmapCreateRequest]) { request =>
              complete {
                roadmapCreateService.create(auth, organizationId, request).map[ToResponseMarshallable] {
                  case RoadmapCreateResult.Created(roadmap)       => StatusCodes.Created -> roadmap
                  case RoadmapCreateResult.NoAccessToOrganization => StatusCodes.Forbidden
                  case RoadmapCreateResult.OrganizationNotFound   => StatusCodes.NotFound
                }
              }
            }
          }
        } ~
          get {
            pathEndOrSingleSlash {
              pagination { pagination =>
                parameters("id".as[RoadmapId].?) { id =>
                  val filters = RoadmapFilters(id = id, pageSizeAndNumber = pagination)
                  complete {
                    roadmapListService.list(auth, organizationId, filters).map[ToResponseMarshallable] {
                      case RoadmapListResult.Ok(roadmapListDtos)    => StatusCodes.OK -> roadmapListDtos
                      case RoadmapListResult.NoAccessToOrganization => StatusCodes.Forbidden
                      case RoadmapListResult.OrganizationNotFound   => StatusCodes.NotFound
                    }
                  }
                }
              }
            }
          }
        }
      }
    [...]

A nesting hell: it’s not easy to follow, the number of closing braces is overwhelming, and adding more routes only makes it worse.

But the DSL is way more flexible than that. Example:

    pathPrefix(organizationsPath / JavaUUID.as[OrganizationId] / roadmapsPath) { organizationId =>
      (post & entity(as[RoadmapCreateRequest]) & pathEndOrSingleSlash) { request =>
        complete {
          roadmapCreateService.create(auth, organizationId, request).map[ToResponseMarshallable] {
            case RoadmapCreateResult.Created(roadmap)       => StatusCodes.Created -> roadmap
            case RoadmapCreateResult.NoAccessToOrganization => StatusCodes.Forbidden
            case RoadmapCreateResult.OrganizationNotFound   => StatusCodes.NotFound
          }
        }
      } ~ (get & pathEndOrSingleSlash & pagination & parameters("id".as[RoadmapId].?)) { (pagination, id) =>
        val filters = RoadmapFilters(id = id, pageSizeAndNumber = pagination)
        complete {
          roadmapListService.list(auth, organizationId, filters).map[ToResponseMarshallable] {
            case RoadmapListResult.Ok(roadmapListDtos)    => StatusCodes.OK -> roadmapListDtos
            case RoadmapListResult.NoAccessToOrganization => StatusCodes.Forbidden
            case RoadmapListResult.OrganizationNotFound   => StatusCodes.NotFound
          }
        }
      }
        [...]

I think the main reason behind this misconception is that the documentation lacks good examples of how to write routing code properly. As you spend more time exploring the DSL, you find out that the routing doesn’t have to be that complicated to write. Using operators like ~, |, or & can significantly reduce nesting. You can also override the complete directive to get rid of the extra nesting and of having to explicitly adjust the type to ToResponseMarshallable:

import akka.http.scaladsl.marshalling.ToResponseMarshallable
import akka.http.scaladsl.server.Directives.{complete => akkaComplete}
import akka.http.scaladsl.server.StandardRoute
import scala.concurrent.{ExecutionContext, Future}

trait CompleteDirectives {
  def complete[T](fut: => Future[T])(map: T => ToResponseMarshallable)(implicit ec: ExecutionContext): StandardRoute =
    akkaComplete(fut.map(map))
}
object CompleteDirectives extends CompleteDirectives

Extending this trait in your router allows for defining the routes in the following way:

    pathPrefix(organizationsPath / JavaUUID.as[OrganizationId] / roadmapsPath) { organizationId =>
      (post & entity(as[RoadmapCreateRequest]) & pathEndOrSingleSlash) { request =>
        complete(roadmapCreateService.create(auth, organizationId, request)) {
          case RoadmapCreateResult.Created(roadmap)       => StatusCodes.Created -> roadmap
          case RoadmapCreateResult.NoAccessToOrganization => StatusCodes.Forbidden
          case RoadmapCreateResult.OrganizationNotFound   => StatusCodes.NotFound
        }
      } ~ (get & pathEndOrSingleSlash & pagination & parameters("id".as[RoadmapId].?)) { (pagination, id) =>
        val filters = RoadmapFilters(id = id, pageSizeAndNumber = pagination)
        complete(roadmapListService.list(auth, organizationId, filters)) {
          case RoadmapListResult.Ok(roadmapListDtos)    => StatusCodes.OK -> roadmapListDtos
          case RoadmapListResult.NoAccessToOrganization => StatusCodes.Forbidden
          case RoadmapListResult.OrganizationNotFound   => StatusCodes.NotFound
        }
      }
      [...]

Keeping the business logic out of the routing helps tremendously too – but that goes for any HTTP library you use.

Nevertheless, what you want here is to keep the routing concise and located in a single place. Play Framework, for example, takes a more ‘distributed’ approach where an endpoint is defined in two places – the routes file and your controller:

// file: authentications.routes

POST    /api/v1/session    @authentications.controllers.AuthenticationControllerImpl.login

// file: authentications.controllers.AuthenticationControllerImpl

def login: Action[AnyContent] = action.async { implicit request =>
  deserialize[LoginRequest] { loginRequest =>
    [...]

In my opinion, this only introduces more code to worry about. You also can’t gather enough information about the endpoint by looking only at a single file – how do I tell what HTTP method the endpoint is using by looking at the login definition in AuthenticationControllerImpl? If I’m looking at the routes file – how do I tell if that POST endpoint requires a body? It requires switching between two files and I think it’s unnecessary.

Directives also have another benefit – they are declarative. This lets us stick to a ‘what I want done’ rather than a ‘how I want it done’ mindset. Let’s take a look at this implementation of an endpoint protected with a secret key:

import akka.http.scaladsl.model.StatusCodes._
import akka.http.scaladsl.server.Route
import pl.iterators.sample.healthcheck.services.HealthService
import pl.iterators.sample.healthcheck.services.HealthService.HealthCheckResult
import pl.iterators.sample.utils.executioncontext.ExecutionContextDomain.RoutersExecutionContext
import pl.iterators.sample.utils.http.{BaseRouter, SecretKeyConfig, SecretKeyDirective}

import scala.concurrent.{ExecutionContext, Future}

class HealthCheckRouter(healthService: HealthService[Future], secretKeyConfig: SecretKeyConfig)(ec: RoutersExecutionContext) extends BaseRouter {
  implicit val e: ExecutionContext = ec

  import HealthCheckRouter._

  val routes: Route = pathPrefix(healthPath) {
    SecretKeyDirective.secretKeyProtection(secretKeyConfig) { _ =>
      (get & path(dbHealthPath) & pathEndOrSingleSlash) {
        complete(healthService.getDbHealth()) {
          case HealthCheckResult.Ok(details) => OK -> details
          case HealthCheckResult.Error       => ServiceUnavailable
        }
      }
    }
  }

}

object HealthCheckRouter {
  val healthPath   = "health-check"
  val dbHealthPath = "db"
}

All I have to do here is pull in a config with the secret key’s name and value and wrap the endpoint with the directive. The directive itself can be implemented in the following way:

import akka.Done
import akka.http.scaladsl.model.HttpHeader
import akka.http.scaladsl.server.directives.HeaderDirectives
import com.typesafe.config.Config
import akka.http.scaladsl.server.Directive1

trait SecretKeyDirective extends HeaderDirectives {
  
  // note: HttpHeader.unapply exposes the header's lowercase name, so `name` is expected to be lowercase here
  private def extractSecretKeyHeader(name: String, value: String): HttpHeader => Option[Done] = {
    case HttpHeader(`name`, `value`) =>
      Some(Done)
    case HttpHeader(_, _) =>
      None
  }

  def secretKeyProtection(config: SecretKeyConfig): Directive1[Done] =
    headerValue(extractSecretKeyHeader(config.name, config.value))

}

object SecretKeyDirective extends SecretKeyDirective

case class SecretKeyConfig(name: String, value: String)

object SecretKeyConfig {
  def apply(config: Config): SecretKeyConfig =
    SecretKeyConfig(name = config.getString("app.secretKey.name"), value = config.getString("app.secretKey.value"))
}

The majority of work here is already done in akka-http’s HeaderDirectives. Achieving the same thing in http4s is possible, but it will require a little bit of work from your side, since by default its DSL takes a different approach to working with headers.
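
For comparison, here’s a rough sketch of what a similar guard could look like in http4s – assuming the 0.23-style API, cats-effect IO, and a hypothetical SecretKeyMiddleware object (the header name and value would come from the same kind of config):

import cats.data.{Kleisli, OptionT}
import cats.effect.IO
import org.http4s.{HttpRoutes, Request}
import org.http4s.dsl.io._
import org.typelevel.ci.CIString

object SecretKeyMiddleware {
  // wraps a set of routes and rejects requests that don't carry the expected header
  def secretKeyProtection(name: String, value: String)(routes: HttpRoutes[IO]): HttpRoutes[IO] =
    Kleisli { (req: Request[IO]) =>
      req.headers.get(CIString(name)) match {
        case Some(headers) if headers.exists(_.value == value) => routes(req)
        case _                                                 => OptionT.liftF(Forbidden())
      }
    }
}

It’s not much code, but you have to assemble it yourself from Kleisli, OptionT, and the headers API instead of reusing a ready-made directive.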

3. The internals are not purely functional, therefore it’s unsafe

Pure FP fans in the Scala community aren’t convinced by akka-http, mainly due to the imperative style in which it was implemented. The Akka ecosystem isn’t exactly known for pushing the Scala compiler to its limits by leveraging type safety, it’s true. But while I found the lack of types in Akka actors a little bit frustrating, I haven’t found any similar design flaw in akka-http that would have a serious impact on my work. It has a huge user base that has contributed to its stability over the years – at this point it’d be hard to mess something up greatly, considering that the library already has all the features it needs and the most important issues were addressed long ago.

I’ve always found the ‘everything I use has to be pure FP’ approach a little bit over the top – especially on the JVM, where library APIs are usually ‘functional’, but the internals are pretty much always written in an imperative style for performance reasons. I also find it a little bit ironic when hardcore FP fans use the Scala collections anyway – they’d better not look at what’s inside, they might get their hearts broken.

My point here is to not disqualify akka-http due to its ‘Scava’ internals: they don’t really have a deal-breaking impact on your application. Your HTTP server is only a small part of your system. It’s better to ensure that functional programming is leveraged the most in your business logic – since that’s the most important part of our applications.

4. It’s tied to scala.concurrent.Future

Only if you’re not willing to add an interop layer for the effect you want to use! Integrating Cats Effect or FS2 with akka-http is absolutely possible. If you’re using akka-http’s WebSocket support but want to use FS2 instead of akka-streams, you can always use this great library. Make sure you check out this invaluable blog post about integrating Cats Effect IO with Akka.
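
For instance, if all you need is to call IO-based logic from a route, a minimal sketch (assuming cats-effect 3, a Dispatcher obtained from your runtime, and a made-up fetchUser service) could look like this:

import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import cats.effect.IO
import cats.effect.std.Dispatcher

// converts the IO to a Future at the very edge of the route
def userRoute(dispatcher: Dispatcher[IO], fetchUser: String => IO[String]): Route =
  (get & path("users" / Segment)) { id =>
    complete(dispatcher.unsafeToFuture(fetchUser(id))) // a Future[String] is marshalled as usual
  }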

5. It’s too complicated to use

I think this one mainly stems from the second misconception discussed in this article. If you had your first experience with akka-http working on a bloated codebase, then I can’t blame you. Still, I don’t believe that akka-http is inherently complicated. Let’s look at the usual flow happening in HTTP libraries: processing a request and then returning a response. Here’s how the signatures are represented in akka-http and http4s:

  • akka-http: RequestContext => Future[RouteResult]
  • http4s: Kleisli[OptionT[F, *], Request, Response]
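
For reference, these are (roughly) the actual type definitions behind those signatures:

// akka-http: a Route is literally a function
type Route = RequestContext => Future[RouteResult]

// http4s: routes are a Kleisli arrow producing an optional response in some effect F
type HttpRoutes[F[_]] = Kleisli[OptionT[F, *], Request[F], Response[F]]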

There’s a popular approach to measuring the complexity of things in programming: show it to a beginner and ask them to explain what it does. Respectfully, I don’t think most of them would have an easy time with the http4s signature. Kleisli is quite a difficult concept to explain – mainly due to the Inversion of Control pattern it entails. Not to mention that if it were obvious and easy to understand, the documentation probably wouldn’t need to tell you not to panic :-).

In my opinion, akka-http does fine in terms of simplicity. Our experience at Iterators shows that onboarded developers don’t find akka-http to be the main obstacle in grasping a codebase. The Scala community is also full of supportive long-time akka-http users – you’ll always find somebody to ask if you need help.

6. It’s better off wrapped by another HTTP library

akka-http can serve as a backend for frameworks (Play) or endpoint libraries (tapir, endpoints4s). It’s pretty common to hear that you should use one of these libraries instead of using akka-http directly. Personally, I’d think twice before doing that – you might lose the flexibility that akka-http offers with its DSL. Take another look at the code snippet from the second misconception:

        complete(roadmapCreateService.create(auth, organizationId, request)) {
          case RoadmapCreateResult.Created(roadmap)       => StatusCodes.Created -> roadmap
          case RoadmapCreateResult.NoAccessToOrganization => StatusCodes.Forbidden
          case RoadmapCreateResult.OrganizationNotFound   => StatusCodes.NotFound
        }

As you can see, we’re not using Either for error handling here. At Iterators, we’re obsessed with ADTs to the point that we created our own monad that revolves around them. One of the main reasons we’re not using Either is that we don’t want to distinguish between errors and successes upfront – some cases that we would be forced to put into a Left can, depending on the context, end up being interpreted as a success in other parts of the application. Sometimes it doesn’t even matter whether it’s an error or not. That’s why we return an ADT instead of Either[ADT, A] and let the caller decide what to do next by pattern matching on that ADT.
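
For illustration, the result type used in the snippet above could be modelled like this (the Roadmap payload is a made-up stand-in for the real DTO):

import java.util.UUID

// hypothetical payload – in the real code this would be the roadmap DTO
final case class Roadmap(id: UUID, name: String)

// a single flat ADT instead of Either[Error, Success]: the caller decides
// which cases count as failures in its particular context
sealed trait RoadmapCreateResult
object RoadmapCreateResult {
  final case class Created(roadmap: Roadmap) extends RoadmapCreateResult
  case object NoAccessToOrganization         extends RoadmapCreateResult
  case object OrganizationNotFound           extends RoadmapCreateResult
}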

Some endpoint libraries are more restrictive and don’t allow us to take this approach. In tapir, for example, you’re expected to map your business logic to Either using the serverLogic function: def serverLogic[F[_]](f: I => F[Either[E, O]]). The status mapping is done in yet another function, which brings even more interesting things to consider.

Let’s take a look at this example:

endpoint.post
  .in("auth" / "session")
  .in(jsonBody[LoginRequest])
  .out(oneOf(statusMapping(Created, jsonBody[LoginResponse.LoggedIn])))
  .errorOut(oneOf(statusMapping(Unauthorized, jsonBody[JsonError]), statusMapping(BadRequest, jsonBody[JsonError])))
  .serverLogic { request =>
    authService.login(request).map {
      case logged @ LoginResponse.LoggedIn(_, _) => Right(logged)
      case LoginResponse.InvalidCredentials      => Left(JsonError("Invalid credentials"))
      case LoginResponse.UserNotFound            => Left(JsonError("User not found"))
    }
  }

As you can see, we’re returning the same case class for both the InvalidCredentials and UserNotFound cases, yet we want to map them to two different statuses – Unauthorized for InvalidCredentials and BadRequest for UserNotFound. Now tapir has a problem: it can’t tell which status belongs to which case, because passing the jsonBody[JsonError] type alone doesn’t give it enough information. Resolving this takes more effort than in akka-http’s DSL, where the status mapping is a single line of code per case.
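
For comparison, here’s roughly how the same mapping would read with the complete override from earlier (LoginResponse and JsonError are the same hypothetical types as in the tapir snippet, with JSON marshallers assumed to be in scope):

complete(authService.login(request)) {
  case logged @ LoginResponse.LoggedIn(_, _) => StatusCodes.Created -> logged
  case LoginResponse.InvalidCredentials      => StatusCodes.Unauthorized -> JsonError("Invalid credentials")
  case LoginResponse.UserNotFound            => StatusCodes.BadRequest -> JsonError("User not found")
}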

Another example: sometimes you want to do query parameter validation at the router level. Here’s how you can validate a request’s query parameters in tapir:

    endpoint.get
      .in("example")
      .in(query[Int]("amount").validate(Validator.min(0)).validate(Validator.max(100)))
      .in(query[Int]("score").validate(Validator.min(0)).validate(Validator.max(100)))

It’s readable and it works just fine – but only as long as you don’t want to accumulate errors and return them all in the response. In that case, tapir will only return the first validation failure, even if there are multiple. Meanwhile, akka-http lets you achieve this by writing a directive that takes care of the error accumulation.
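
A minimal sketch of such a directive might look like this – the parameter names and the 0..100 bounds mirror the tapir example above, and the error formatting is deliberately simplistic:

import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.server.{Directive, Directives}

trait ValidatedParamsDirective extends Directives {

  private def checkRange(name: String, value: Int): List[String] =
    if (value < 0 || value > 100) List(s"$name must be between 0 and 100") else Nil

  // extracts both parameters and accumulates every range violation before rejecting the request
  val validatedAmountAndScore: Directive[(Int, Int)] =
    parameters("amount".as[Int], "score".as[Int]).tflatMap { case (amount, score) =>
      checkRange("amount", amount) ++ checkRange("score", score) match {
        case Nil    => tprovide((amount, score))
        case errors => Directive[(Int, Int)] { _ => complete(StatusCodes.BadRequest -> errors.mkString(", ")) }
      }
    }
}

Usage then looks like any other directive: (get & path("example") & validatedAmountAndScore) { (amount, score) => ... }.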

This obviously doesn’t mean that tapir is a bad library. On the contrary, it’s a solid choice when you want to abstract over HTTP servers and have a very easy way to generate API documentation. The tradeoff is that you can lose the flexibility of the underlying server, which comes in very handy when you’re doing non-standard things. If you’re not – you’ll do just fine with tapir. There are no silver bullets – but you probably already know that 🙂

Conclusion

Over the years, akka-http has frequently been at the center of heated discussions, which produced a lot of interesting viewpoints in the Scala community. Some of those viewpoints were never really confronted with what akka-http is actually capable of, and they are the main focus of this article. I hope I provided some balance with my thoughts on this library.

8 Comments

You can actually combine akka directives with tapir – some of the less common use cases can be implemented this way.

Just wanted to leave two notes about tapir, if you’ll allow 🙂

Firstly, while using the error channel + Eithers in the server logic is indeed the default way, it’s also completely optional. You can map your ADT to a response description using oneOf(…) in regular outputs, and then use .serverLogicSuccess to specify the server logic. This requires a function I => F[O] – no Eithers.

As for accumulating errors, that’s indeed how things currently work, but there’s nothing in the design prohibiting error accumulation. It’s just not implemented 🙂 You can vote for https://github.com/softwaremill/tapir/issues/2640 if you’d like to see this implemented. Maybe you can share some use cases as well – for example, which request fragments’ errors should take part in the accumulation?

Tapir does indeed have multiple options for handling the server logic – but I haven’t seen one that would free me from having to think about which cases of my ADT are successes and which are failures, so using `serverLogicSuccess` doesn’t fit my coding style. As for `oneOf` – I couldn’t get it to work in the use case mentioned in the article; maybe I’m doing something completely wrong.

Accumulating query parameter validation errors is a good start – I gave a thumbs up in the GitHub issue 🙂

Hi, thanks for the article! But none of the points are convincing enough – it’s impossible to compare akka-http to any of the Typelevel or ZIO libraries in terms of quality, support, or growth.

It’s possible, and Pekko is still up there with Typelevel when comparing the level of support – there are plenty of Akka veterans out there, and the number of projects running on Akka is still far higher than the number running on Cats or other Typelevel libraries. The number of active Akka contributors fluctuates pretty much the same as the number of Typelevel contributors.
As for quality and growth – I don’t find Akka projects lacking in either compared to other ecosystems, and Akka is still #1 if your project needs clustering.
