Architecting a new application can be one of the most challenging and most exciting parts of creating a new service. The main reason is that it's very rare to start from a blank page.
If you work in a larger business, perhaps at enterprise scale, there will already be existing infrastructure; you won't be building the full stack yourself. Even at a smaller company, there will - read: should - be some level of scaffolding and templating in place, so creating an MVP isn't an empty canvas.
Starting your own startup, on the other hand, means the opportunities are endless.
Building BetaBud meant first validating the idea. Folks often do this with a simple landing page and a waitlist; however, being an engineer, I wanted something tangible - something users could use from the get-go. Strictly a Minimum Viable Product (MVP).
This series of articles will explore the stages BetaBud went through to reach that MVP. It is going to be rather specific to the site itself; numerous other options and approaches could be taken, but this was the path chosen for BetaBud.
What a startup often lacks is spare time and resources. This is why the Cloud has been such a positive innovation for small businesses, removing the maintenance overhead of on-premises servers or even rented space in a data centre. Instead, AWS, Azure, and GCP, amongst others, offer availability and scaling guarantees that a small team can't match on its own.
Therefore, I opted for AWS. It is the market leader, I have a great deal of previous experience with its products, and it has a generous free tier to get started with.
I don’t think AWS needs any more marketing.
Another big reason for using AWS is that Localstack allows for rapid development. Localstack is an open-source, containerised emulation of AWS. There is a generous free tier - perfect for startups - as well as a paid tier that unlocks additional products from the AWS suite.
Many large companies use and trust Localstack. It allows for rapid local development without increasing that inevitable AWS spend. I can't recommend it enough! A couple of simple service URL and region changes and it's almost at parity with the real McCoy.
Spin up a simple Docker container with the services you need, alongside some seed scripts to create tables, streams, and so on. It is a great playground, rather than requiring you to prematurely reach for a real AWS account.
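To make that concrete, below is a minimal sketch of what those service URL and region changes look like with the Dotnet AWS SDK. The edge port 4566 and the dummy credentials are Localstack defaults; the region is a placeholder rather than BetaBud's actual configuration.

```csharp
using System;
using Amazon.DynamoDBv2;

// Point the Dotnet AWS SDK at Localstack's edge port instead of the real AWS endpoints.
var config = new AmazonDynamoDBConfig
{
    ServiceURL = "http://localhost:4566",   // Localstack's default edge port
    AuthenticationRegion = "eu-west-1"      // placeholder region
};

// Localstack accepts any dummy credentials, so nothing here is a secret.
var client = new AmazonDynamoDBClient("test", "test", config);

// Every SDK call now hits the local container - here, listing the seeded tables.
var tables = await client.ListTablesAsync();
Console.WriteLine(string.Join(", ", tables.TableNames));
```

Drop the ServiceURL override (or drive it from configuration) and exactly the same code talks to the real thing.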
As BetaBud is primarily a website, it follows the conventional pattern for web structure: a separated front and back end, alongside a data store. As this is an MVP, it has been built as two monoliths for speed of development, but with scope to separate things out further along the way.
Splitting the back end from the front end and exposing it as an API makes life easier should the UI take a different form later down the line: a mobile app, for instance, or an installable desktop application.
This diagram demonstrates the different black-box components of BetaBud.
As with all software, I believe how the data is structured should be the key consideration. Consider the different query and write patterns the application needs; this will help inform the type of database you choose, be it NoSQL, SQL, or a hybrid of the two - unlikely for a first-pass MVP.
Initially, BetaBud uses AWS DynamoDB exclusively. This low-latency data store offers all of the querying required for the initial MVP, and the schema adheres to the AWS recommendation of single-table design for data that is highly related.
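As a rough illustration of what single-table design means in practice, the sketch below stores two different entity types in the same table and tells them apart with composite keys. The entity names and key format here are hypothetical, not BetaBud's actual schema.

```csharp
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DocumentModel;

var client = new AmazonDynamoDBClient();
var table = Table.LoadTable(client, "BetaBud"); // hypothetical table name

// A user profile item: the partition key identifies the user, the sort key marks the item type.
var profile = new Document
{
    ["PK"] = "USER#123",
    ["SK"] = "PROFILE",
    ["Email"] = "someone@example.com"
};
await table.PutItemAsync(profile);

// Feedback left by that user lives in the same table, under the same partition key.
var feedback = new Document
{
    ["PK"] = "USER#123",
    ["SK"] = "FEEDBACK#2024-06-01",
    ["Body"] = "Love the beta!"
};
await table.PutItemAsync(feedback);

// A single query on PK = "USER#123" now returns the profile and all feedback in one round trip.
```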
When developing locally, I spin up a DynamoDB table in Localstack, alongside the dynamodb-admin Docker image. A simple UI means I can interact with the local instance of DynamoDB all within the browser.
Additionally, the DynamoDB streams functionality will offer great flexibility for future feature sets, such as metrics and notifications.
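None of that exists yet, but to give a flavour, a stream consumer could look something like the following: a Lambda handler subscribed to the table's stream that could later fan records out to metrics or notifications. The namespace and the "PK" key come from the hypothetical schema above, not from BetaBud's real code.

```csharp
using Amazon.Lambda.Core;
using Amazon.Lambda.DynamoDBEvents;

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace BetaBud.Streams;

public class StreamHandler
{
    // Invoked by Lambda with each batch of records from the table's stream.
    public void Handle(DynamoDBEvent dynamoEvent, ILambdaContext context)
    {
        foreach (var record in dynamoEvent.Records)
        {
            // EventName is INSERT, MODIFY or REMOVE; "PK" is from the hypothetical schema above.
            var pk = record.Dynamodb.Keys["PK"].S;

            // This is where metrics or notification fan-out would eventually hang off.
            context.Logger.LogLine($"{record.EventName}: {pk}");
        }
    }
}
```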
The API is a Dotnet Web API, built on Amazon.Lambda.AspNetCoreServer. What a great package! My main motivation for using Dotnet is familiarity, and I really cannot speak highly enough about Dotnet and its current trajectory; type safety, tooling, and performance all factor in too.
It has a reputation as being for big corporations, which I see as a stamp of quality and stability, not some hipster insult.
Currently, this API is a small monolith containing multiple endpoints. It is set up to be extended and then eventually split into several different microservices.
This is then deployed to AWS Lambda.
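For context, the heart of what Amazon.Lambda.AspNetCoreServer provides is an entry point class that translates API Gateway events into the normal ASP.NET Core pipeline and back again. The sketch below shows the general shape; the namespace and the Startup wiring are illustrative rather than BetaBud's exact code.

```csharp
using Amazon.Lambda.AspNetCoreServer;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;

namespace BetaBud.Api;

// The Lambda runtime invokes this class; the base type converts API Gateway
// requests into HttpContexts and feeds them through the usual middleware.
public class LambdaEntryPoint : APIGatewayProxyFunction
{
    protected override void Init(IWebHostBuilder builder)
    {
        // Reuse the same Startup as a local `dotnet run`, so one pipeline serves both.
        builder.UseStartup<Startup>();
    }
}

// Illustrative Startup - the real API registers its own services and endpoints.
public class Startup
{
    public void ConfigureServices(IServiceCollection services) => services.AddControllers();

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}
```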
This API sits behind an AWS API Gateway. Among the motivations for the gateway: it gives the Lambda a stable public entry point, and it can validate the Cognito-issued JWT before a request ever reaches the application code.
As mentioned, I wanted to keep the front and back end entirely separate. This allows for extensibility further down the line should the user interface take on another form, and it also makes more isolated testing easier.
The current front end is written in Next.js and is presently deployed to Vercel. My affinity for Next.js isn't what it is for Dotnet: I have used many different front-end frameworks previously, and that world is forever changing. I could have used Vue, Angular, Blazor, standalone React, or even vanilla JS - it is hard to keep up.
Still, I chose Next.js as it is great for rapid development. The jury is still out, though.
Alongside Next.js I have used the Material UI component library. I like this library because it is well supported and breaking changes between versions are kept to a minimum - something of a rarity in the JS world, it seems.
The front end then proxies its requests through to the API Gateway, attaching a JWT to the REST requests.
AWS Cognito handles the authorisation and authentication for the application. Currently, I'm using the Hosted UI, which leaves a lot to be desired - plenty of features are missing. Yet it is secure, and polishing it wasn't the highest of priorities for the MVP, as most of my complaints are superficial.
The front end then harnesses Auth.js - formerly NextAuth.js - to handle the session and the JWT. Whenever a request is made to the API, the JWT is retrieved and passed in the Authorization header.
Both the API Gateway and the Web API then inspect the request, checking that the JWT is valid for both authentication and authorisation.
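On the Web API side, a minimal sketch of that validation might look like the following, using the standard JwtBearer middleware pointed at the Cognito user pool. The region and pool ID are placeholders, and BetaBud's actual policies will differ.

```csharp
using Microsoft.AspNetCore.Authentication.JwtBearer;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddAuthorization();
builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        // Cognito publishes its signing keys under this authority, so the middleware
        // can fetch the JWKS and verify token signatures on its own.
        options.Authority = "https://cognito-idp.eu-west-1.amazonaws.com/eu-west-1_EXAMPLE";

        // Cognito access tokens carry the app client in `client_id` rather than `aud`.
        options.TokenValidationParameters.ValidateAudience = false;
    });

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();

// Anything marked RequireAuthorization now rejects requests without a valid JWT.
app.MapGet("/ping", () => "pong").RequireAuthorization();

app.Run();
```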
When developing locally, rather than using Localstack for AWS Cognito, I talk to AWS itself. Even though Localstack does offer the service, I use a clone of the production Cognito user pool. The main motivation is that I am using the Hosted UI, and this keeps the setup simple and my local environment as like-for-like as possible.
When I move away from the Hosted UI, I will perhaps reconsider this setup.
Continuous Integration and Continuous Delivery are beautiful practices that have helped speed up project development vastly in recent years. Establishing these practices at the startup level can sometimes feel like the bottom of the pile.
For the front end, I am utilizing Vercel, so they take care of most of it here. Merge into master and they create a production build, run the tests, and promote the latest artefact. This developer experience is the reason Vercel has become so popular - no need to create pipelines or maintain any build infrastructure yourself.
On the back end, this setup is currently a little haphazard, as can be expected when starting. In an ideal world, with infinite time I would use Terraform. However, I didn’t have the patience for setting up a build agent, pipelines, stacks…
You can even set up Localstack with Terraform now.
Although I wanted to move fast, I certainly didn't want to rely on ClickOps - configuring everything directly in the console through the UI. Granted, UIs are great, but this approach is highly error-prone, doesn't hold up well for disaster recovery, and doesn't allow for easily spinning up different environments.
Therefore I opted to use CloudFormation. It handles the creation of my AWS resources, including IAM policies - remember to always follow the Principle of Least Privilege.
I then deploy using the AWS Toolkit, directly from my local machine. This handles analysing the current stack, creating the change set, and then applying the changes - creating, destroying, or modifying resources as needed. The Lambda is also packaged as a zip, and the function code is updated.
No, it doesn't scale well, and it's not a pristine GitHub Action that triggers on merge. However, it is all about priorities, particularly at the beginning. What is going to add more value: a new feature, outreach like this post, or saving a single engineer a little deployment time?