(to the tune of the Ewoks Victory Song. Now it can be stuck in your head, and not just mine)
Picking up from last time, I needed to start creating things inside my Virtual Private Cloud, or VPC. The first things to create are subnets – public subnets, in particular. Without a public subnet, nothing that I run in the VPC can be accessed from the internet – nor can it access the internet in turn.
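What makes a subnet "public" is the routing around it: an internet gateway attached to the VPC, and a route table that sends non-local traffic out through that gateway. Here's a minimal sketch of the wiring in CloudFormation – the resource names and CIDR range are illustrative, and it assumes a VPC resource named Vpc elsewhere in the template:

```yaml
Resources:
  InternetGateway:
    Type: AWS::EC2::InternetGateway

  GatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref Vpc                    # assumes a VPC resource named Vpc
      InternetGatewayId: !Ref InternetGateway

  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref Vpc
      CidrBlock: 10.0.0.0/24             # illustrative range
      MapPublicIpOnLaunch: true          # instances get a public IP on launch

  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref Vpc

  PublicRoute:
    Type: AWS::EC2::Route
    DependsOn: GatewayAttachment         # route is useless until the gateway is attached
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0    # all non-local traffic goes via the gateway
      GatewayId: !Ref InternetGateway

  PublicSubnetRouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref PublicRouteTable
```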
As previously discussed, I’m doing a small series of posts around bringing the AWS infrastructure that I use into the current era, and putting it all into CloudFormation. In this post, I’m going to cover setting up the first stack. This is going to set up a Virtual Private Cloud (or VPC), which is where the rest of the stuff I make later will sit.
What’s a Virtual Private Cloud?
A VPC is a virtual network of virtual servers. It’s your own mini-slice of the AWS cloud, and the machines within the VPC are aware of each other – in fact, they are on their own subnet (or subnets).
Why use a VPC?
You don’t have to set up a VPC to use AWS. You can simply create servers. That’s what I’ve been doing up until now. It’s just that it’s a bit limited.
I want to use a VPC for two big reasons:
I want to be able to use more recent/powerful/cheaper machine images, with OpsWorks. They’re only available if I also use a VPC.
I want to use an Elastic Load Balancer, in part to manage HTTPS certificates and connections. This requires a VPC and subnets.
Setting up the Stack
Here’s my config, at this particular stage:
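In outline, it looks something like this – a representative sketch rather than the file verbatim, with the CIDR block, names, and tag values as placeholders:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: VPC stack - kept separate so it survives environment teardowns

Parameters:
  Environment:
    Type: String
    Default: test
    AllowedValues: [test, prod]     # keeps my test stacks separate from prod
    Description: Which environment this VPC belongs to

Resources:
  Vpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16        # placeholder range
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: !Sub '${Environment}-vpc'   # YAML shorthand for Fn::Sub

Outputs:
  VpcId:
    Value: !Ref Vpc
    Export:
      Name: !Sub '${Environment}-VpcId'      # one way for later stacks to find the VPC
```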
I put the VPC into its own file, because I don’t want to delete it when tearing down an environment for the sake of testing. There are lots of things that get annoying to re-create if the VPC is changed (cough OpsWorks stacks) – so the VPC goes in its own file. (Later, when I bring in nested stacks, this file will stay outside the nested stack.)
Break It Down
First, note that I use YAML for my CloudFormation files. I’m not a huge fan of YAML in general, but the JSON option doesn’t allow you to use comments, and comments are essential. (Sidebar: when parsing JSON, always enable comments. It’s non-standard, but it’s useful). Using YAML also lets me use a more convenient shorthand for accessing some inbuilt functions (the Sub one is used here). I strongly suggest you do the same.
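To make that concrete, here's the tag value from the sketch above in both forms – the short form only works in YAML:

```yaml
# Short form (YAML only):
Value: !Sub '${Environment}-vpc'

# Long form (works in both YAML and JSON):
Value:
  Fn::Sub: '${Environment}-vpc'
```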
The Parameters block provides me with some configuration options, letting me create different instances of the stack if I want. In this case, I use an Environment parameter. This particular parameter is common to all my stack files, and I use it to separate test stacks from the prod ones. (I could also do this with AWS sub-accounts.)
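Spinning up another copy of the stack is then just a matter of passing a different value – something like this, with the stack and file names being illustrative:

```
aws cloudformation create-stack \
  --stack-name test-vpc \
  --template-body file://vpc.yaml \
  --parameters ParameterKey=Environment,ParameterValue=test
```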
In my previous post, I demonstrated how to configure a Jenkins Server using Docker. The next step is to create a Jenkins job to build some software. Now, we could just do a simple freestyle job, or a basic Maven build – but that would require configuring Jenkins every time we want to make a new project, and that makes managing the Jenkins Server via Docker more annoying. So, instead, I’m going to use the Cloudbees Bitbucket Branch Source Plugin and create a Bitbucket Team/Project job that will create the rest of my Jenkins jobs automagically for me. A similar plugin exists for GitHub, though I haven’t looked into it.
With the upcoming end-of-life for Bamboo Cloud, I’m in the market for a new build server setup. For this experiment, I’m returning to an old favourite – Jenkins – paired with a potential new favourite – Docker. In this post, I describe how I’ve set up a Jenkins server in a Docker container, using the Multibranch Pipeline plugin to automatically configure a simple build.
Well, it’s been three-and-a-half years, but I’ve finally got around to getting to a point of writing an iOS app. I wouldn’t hold your breath waiting to get a copy, though – it’s purely for my private use, to aid in monitoring and administering the IES project.
After doing enough tutorials and similar exercises to be comfortable in building the app and the UI, I got around to trying calls to the AWS infrastructure. This proved a bit more difficult than I anticipated – hence this aide-mémoire. This isn’t going to be useful for non-iOS developers, and I doubt it’s going to have anything new for more seasoned iOS developers; only iOS noobs like me need bother.
Having succeeded in getting a simple JBehave story running, my next challenge is to scale it up a bit. In particular, I want to get a JBehave story that integrates with Spring to do something more fully-featured: save an entry in a database.
In the last segment, I managed to get JBehave reporting under Maven using a pre-canned example. This time, I want to tackle the other extreme – I want to develop a single story in JBehave and see the bare minimum it takes to get it running inside an IDE (in my case, Eclipse).
JBehave comes with some very comprehensive examples, so I thought I’d start there to see if I could get one of them building – and reporting – under Maven. The example I chose was the ‘trader’ example, which you can see on GitHub.
A couple of years ago, I got the ‘specification-by-example’ bug. I was playing with this new project called Cucumber, and was really enjoying the idea of specifying examples in an English-like syntax that testers and BAs could supposedly read. (I say supposedly, because it never really took off at my previous place of employment. Which was a shame.) Nonetheless, I enjoyed it, and advocated it when and where I could. Heck, if nothing else, it was an excuse to write support code in Ruby as a break from the Java stuff.