Rockford Lhotka's Blog


 Tuesday, 02 October 2018

For many months (maybe years) I tried to use the Netflix app on my Surface Pro 3 and then Surface Pro 4.

The audio would always get out of sync with the video when streaming shows. Once Netflix allowed downloading of shows it worked ok - but only with downloaded content.

This was true as recently as February 2018.

Searching the web revealed that other people had this issue too, and the consensus was that it had something to do with the audio driver for the chipset used in the Surface (and maybe other devices).

Recently I got a Surface Go, and to my surprise Netflix worked fine.

So I tried it again on my Surface Pro 4, and it works fine there too now.

I assume that either a Surface driver update or a Netflix app update (or both) occurred since February that finally resolved the issue with streaming and audio playback sync.

In any case, the news is good, and the Netflix app is working great on both my Surface Pro and Surface Go. This makes life in hotels a whole lot nicer, as the last thing I want is to be stuck with cable! 😃

Tuesday, 02 October 2018 16:12:24 (Central Standard Time, UTC-06:00)  #    Disclaimer
 Tuesday, 25 September 2018

As people reading my blog know, I'm an advocate of container-based deployment of server software. The most commonly used container technology at the moment is Docker. And the most popular way to orchestrate or manage clusters of Docker containers is via Kubernetes (K8s).

Azure, AWS, and Google Cloud all have managed K8s offerings, so if you can use the public cloud there's no good reason I can see for installing and managing your own K8s environment. It is far simpler to just use one of the pre-existing managed offerings.

But if you need to run K8s in your own datacenter then you'll need some way to get a cluster going. I thought I'd figure this out from the base up, installing everything. There are some pretty good docs out there, but as with all things, I ran into some sharp edges and bumps on the way, so I thought I'd write up my experience in the hopes that it helps someone else (or me, next time I need to do an install).

Note: I haven't been a system admin of any sort since 1992, when I was admin for a Novell file server, a Windows NT server, and a couple VAX computers. So if you are a system admin for Linux and you think I did dumb stuff, I probably did 😃

The environment I set up is this:

  1. 1 K8s control node
  2. 2 K8s worker nodes
  3. All running as Ubuntu server VMs in Hyper-V on a single host server in the Magenic data center

Install Ubuntu server

So step 1 was to install Ubuntu 64-bit server on three VMs on our Hyper-V server.

Make sure to install Ubuntu server, not desktop. Otherwise, this article has good instructions.

This is pretty straightforward; the only notes are:

  1. When the server install offers to pre-install stuff, DON'T (at least don't pre-install Docker)
  2. Make sure the IP addresses won't change over time - K8s doesn't like that
  3. Make sure the MAC addresses for the virtual network cards won't change over time - Ubuntu doesn't like that
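
For the static IP point, on Ubuntu 18.04 the address can be pinned with a netplan config. The sketch below is illustrative only - the file name, interface name (eth0), and all addresses are assumptions you'd replace for your own network; apply it with `sudo netplan apply`:

```yaml
# /etc/netplan/01-static.yaml - illustrative values, adjust for your network
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
```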

Install Docker

The next step is to install Docker on all three nodes. There's a web page with instructions

⚠ Just make sure to read to the bottom, because most Linux install docs seem to read like those elementary school trick tests where the last step in the quiz is to do nothing - you know, a PITA.

In this case, the catch is that there is a release version of Docker, so don't use the test version. Here are the bash steps to get stable Docker for Ubuntu 18.04:

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update
sudo apt install docker-ce

Later on the K8s instructions will say to install docker.io, but that's the version packaged in Ubuntu's own repositories, while docker-ce is Docker's official release.

You will probably want to grant your current user id permission to interact with docker:

sudo usermod -aG docker $USER

Note that the group change takes effect at your next login, so log out and back in. Repeat this process on all three nodes.

Install kubeadm, kubectl, and kubelet

All three nodes need Docker and also the Kubernetes tools: kubeadm, kubectl, and kubelet.

There's a good instruction page on how to install the tools. Notes:

  1. Ignore the part about installing Docker, we already did that
  2. In fact, you can read-but-ignore all of the page except for the section titled Installing kubeadm, kubelet and kubectl. Only this one bit of bash is necessary:

Become root:

sudo su -

Then install the tools:

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

I have no doubt that all the other instructions are valuable if you don't follow the default path. But in my case I wanted a basic K8s cluster install, so I followed the default path - after a lot of extra reading related to the non-critical parts of the doc about optional features and advanced scenarios.

Install Kubernetes on the master node

One of my three nodes is the master node. By default this node doesn't run any worker containers, only containers necessary for K8s itself.

Again, there's a good instruction page on how to create a Kubernetes cluster. The first part of the doc describes the master node setup, followed by the worker node setup.

This doc is another example of read-to-the-bottom. I found it kind of confusing and had some false starts following these instructions: hence this blog post.

Select pod network

One key thing is that before you start you need to figure out the networking scheme you'll be using between your K8s nodes and pods: the pod network.

I've been unable to find a good comparison or explanation as to why someone might use any of the numerous options. In all my reading the one that came up most often is Flannel, so that's what I chose. Kind of arbitrary, but what are you going to do?

Once you've selected your pod network then you can proceed with the instructions to set up the master node.

Set up master node

Read the doc referenced earlier for more details, but here are the distilled steps to install the master K8s node with the Flannel pod network.

Kubernetes can't run if swap is enabled, so turn it off:

swapoff -a

This command only affects the current session; swap must also stay off after any reboot. To do that, edit the /etc/fstab file and comment out the lines regarding the swap file. For example, I've added a # to comment out these two lines in mine:

#UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 defaults 0 0
#/swap.img	none	swap	sw	0	0
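
If you'd rather script that edit, a sed one-liner can do it; this is just a sketch - it assumes swap entries contain a whitespace-delimited "swap", and it keeps a .bak backup of the original:

```shell
# Comment out any fstab line containing a whitespace-delimited "swap",
# keeping a .bak backup of the original file (run as root).
sed -i.bak '/[[:space:]]swap[[:space:]]/ s/^/#/' /etc/fstab
```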

Now it is possible to initialize the cluster:

kubeadm init --pod-network-cidr=10.244.0.0/16

💡 The output of kubeadm init includes a lot of information. It is important that you take note of the kubeadm join statement that's part of the output, as we'll need that later.

Next make kubectl work for the root user:

export KUBECONFIG=/etc/kubernetes/admin.conf

Pass bridged IPv4 traffic to iptables (as per the Flannel requirements):

sysctl net.bridge.bridge-nf-call-iptables=1
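
That sysctl setting only lasts until reboot. To make it persist, the same value can go into a sysctl config file - note the file name here is my own arbitrary choice:

```shell
# Persist the bridge setting across reboots (run as root);
# the file name under /etc/sysctl.d/ is arbitrary.
echo 'net.bridge.bridge-nf-call-iptables = 1' | tee /etc/sysctl.d/k8s.conf
```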

Apply the Flannel v0.10.0 pod network configuration to the cluster:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml

⚠ Apparently once v0.11.0 is released this URL will change.

Now the master node is set up, so you can test to see if it is working:

kubectl get pods --all-namespaces

Optionally allow workers to run on master

If you are ok with the security ramifications (such as in a dev environment), you might consider allowing the master node to run worker containers.

To do this run the following command on the master node (as root):

kubectl taint nodes --all node-role.kubernetes.io/master-

Configure kubectl for admin users

The last step in configuring the master node is to allow the use of kubectl if you aren't root, but are a cluster admin.

⚠ This step should only be followed for the K8s admin users. Regular users are different, and I'll cover that later in the blog post.

First, if you are still root, exit:

exit

Then the following bash is used to configure kubectl for the K8s admin user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

What this does is take the K8s keys that are in a secure location and copy them to the admin user's ~/.kube directory. That's where kubectl looks for the config file with the information necessary to run as cluster admin.

At this point the master node should be up and running.

Set up worker nodes

In my case I have 2 worker nodes. Earlier I talked about installing Docker and the K8s tools on each node, so all that work is done. All that remains is to join each worker node to the cluster controlled by the master node.

That 'kubeadm join' statement that was displayed in the output of 'kubeadm init' is the key here.

Log onto each worker node and run that bash command as root. Something like:

sudo kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

That'll join the worker node to the cluster.

Once you've joined the worker nodes, back on the master node you can see if they are connected:

kubectl get nodes

That is it - the K8s cluster is now up and running.

Grant access to developers

The final important requirement is to allow developers access to the cluster.

Earlier I copied the admin.conf file for use by the cluster admin, but for regular users you need to create a different conf file. This is done with the following command on the master node:

sudo kubeadm alpha phase kubeconfig user --client-name user > ~/user.conf

The result is a user.conf file that provides non-admin access to the cluster. Users need to put that file in their own ~/.kube directory with the file name config: ~/.kube/config.

If you plan to create user accounts going forward, you can put this file into the /etc/skel/ directory as a default for new users:

sudo mkdir /etc/skel/.kube
sudo cp ~/user.conf /etc/skel/.kube/config

As you create new users (on the master node server) they'll now already have the keys necessary to use kubectl to deploy their images.

Summary

There are a lot of options and variations on how to install Kubernetes using kubeadm. My intent with this blog post is to have a linear walkthrough of the process based as much as possible on defaults; the exception being my choice of Flannel as the pod network.

Of course the world is dynamic and things change over time, so we'll see how long this blog post remains valid into the future.

Tuesday, 25 September 2018 09:17:32 (Central Standard Time, UTC-06:00)  #    Disclaimer
 Monday, 17 September 2018

I'm not 100% sure of the cause here, but today I ran into an issue getting the latest Visual Studio 2017 to work with Docker for Windows.

My device does have the latest VS preview installed too, and I suspect that's the core of my issue, but I don't know for sure.

So here's the problem I encountered.

  1. Open VS 2017 15.8.4
  2. Create a new ASP.NET Core web project with Docker selected
  3. Press F5 to run
  4. The docker container gets built, but doesn't run

I tried a lot of stuff. Eventually I just ran the image from the command line:

docker run -i 3247987a3

By using -i I got an interactive view into the container as it failed to launch.

The problem turns out to be that the container doesn't have Microsoft.AspNetCore.App 2.1.1, and apparently the newly-created project wants that version. The container only has 2.1.0 installed.

It was not possible to find any compatible framework version
The specified framework 'Microsoft.AspNetCore.App', version '2.1.1' was not found.
  - Check application dependencies and target a framework version installed at:
      /usr/share/dotnet/
  - Installing .NET Core prerequisites might help resolve this problem:
      http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409
  - The .NET Core framework and SDK can be installed from:
      https://aka.ms/dotnet-download
  - The following versions are installed:
      2.1.0 at [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]

The solution turns out to be to specify the version number in the csproj file.

    <PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.0" />

Microsoft's recent guidance has been to not specify the version at all and their new project template reflects that guidance. Unfortunately there's something happening on my machine (I assume the VS preview) that makes things fail if the version is not explicitly marked as 2.1.0.

Monday, 17 September 2018 22:49:26 (Central Standard Time, UTC-06:00)  #    Disclaimer
 Wednesday, 12 September 2018

I've recently become a bit addicted to Quora. It is probably because of their BNBR (be nice, be respectful) policy, so it isn't as nasty as Twitter and Facebook have become over the past couple years.

It also turns out that there are tech communities found on the site, and I've answered some questions recently. Stuff I probably would have (should have?) put on my blog, but wrote there instead.

Wednesday, 12 September 2018 15:13:35 (Central Standard Time, UTC-06:00)  #    Disclaimer
 Thursday, 30 August 2018

Software deployment has been a major problem for decades. On the client and the server.

On the client, the inability to deploy apps to devices without breaking other apps - or sometimes the client operating system (OS) itself - has pushed most business software development to relying entirely on the client's browser as a runtime. Or in some cases you may leverage the deployment models of per-platform "stores" from Apple, Google, or Microsoft.

On the server, all sorts of solutions have been attempted, including complex and costly server-side management/deployment software. Over the past many years the industry has mostly gravitated toward the use of virtual machines (VMs) to ease some of the pain, but the costly server-side management software remains critical.

At some point containers may revolutionize client deployment, but right now they are in the process of revolutionizing server deployment, and that's where I'll focus in the remainder of this post.

Fairly recently the concept of containers, most widely recognized with Docker, has gained rapid acceptance.

tl;dr

Containers offer numerous benefits over older IT models such as virtual machines. Containers integrate smoothly into DevOps, streamlining and stabilizing the move from source code to deployable assets. Containers also standardize the deployment and runtime model for applications and services in production (and test/staging). Containers are an enabling technology for microservice architecture and DevOps.

Virtual Machines to Containers

Containers are somewhat like virtual machines, except they are much lighter weight and thus offer major benefits. A VM virtualizes the hardware, allowing installation of the OS on "fake" hardware, and your software is installed and run on that OS. A container virtualizes the OS, allowing you to install and run your software on this "fake" OS.

In other words, containers virtualize at a higher level than VMs. This means that where a VM takes many seconds to literally boot up the OS, a container doesn't boot up at all - the OS is already there. It just loads and starts our application code, which takes fractions of a second.

Where a VM has a virtual hard drive that contains the entire OS, plus your application code, plus everything else the OS might possibly need, a container has an image file that contains your application code and any dependencies required by that app. As a result, the image files for a container are much smaller than a VM hard drive.
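
As a concrete (and purely illustrative) example, an image definition for a small .NET Core app can be just a few lines - the base image tag and DLL name here are assumptions, not something from this post:

```dockerfile
# Illustrative Dockerfile: the app and its runtime dependency, nothing else
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY ./publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```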

Container image files are stored in a repository so they can be easily managed and then downloaded to physical servers for execution. This is possible because they are so much smaller than a virtual hard drive, and the result is a much more flexible and powerful deployment model.

Containers vs PaaS/FaaS

Platform as a Service and Functions as a Service have become very popular ways to build and deploy software, especially in public clouds such as Microsoft Azure. Sometimes FaaS is also referred to as "serverless" computing, because your code only uses resources while running, and otherwise doesn't consume server resources; hence being "serverless".

The thing to keep in mind is that PaaS and FaaS are both really examples of container-based computing. Your cloud vendor creates a container that includes an OS and various other platform-level dependencies such as the .NET Framework, nodejs, Python, the JDK, etc. You install your code into that pre-built environment and it runs. This is true whether you are using PaaS to host a web site, or FaaS to host a function written in C#, JavaScript, or Java.

I always think of this as a spectrum. On one end are virtual machines, on the other is PaaS/FaaS, and in the middle are Docker containers.

VMs give you total control at the cost of you needing to manage everything. You are forced to manage machines at all levels, from OS updates and patches, to installation and management of platform dependencies like .NET and the JDK. Worse, there's no guarantee of consistency between instances of your VMs because each one is managed separately.

PaaS/FaaS give you essentially zero control. The vendor manages everything - you are forced to live within their runtime (container) model, upgrade when they say upgrade, and only use versions of the platform they currently support. You can't get ahead or fall behind the vendor.

Containers such as Docker give you some abstraction and some control. You get to pick a consistent base image and add in the dependencies your code requires. So there's consistency and maintainability that's far superior to a VM, but not as restrictive as PaaS/FaaS.

Another key aspect to keep in mind is that PaaS/FaaS models are vendor specific. Containers are universally supported by all major cloud vendors, meaning that the code you host in your containers is entirely separated from anything specific to a given cloud vendor.

Containers and DevOps

DevOps has become the dominant way organizations think about the development, security, QA, deployment, and runtime monitoring of apps. When it comes to deployment, containers allow the image file to be the output of the build process.

With a VM model, the build process produces assets that must be then deployed into a VM. But with containers, the build process produces the actual image that will be loaded at runtime. No need to deploy the app or its dependencies, because they are already in the image itself.

This allows the DevOps pipeline to directly output a file, and that file is the unit of deployment!

No longer are IT professionals needed to deploy apps and dependencies onto the OS. Or even to configure the OS, because the app, dependencies, and configuration are all part of the DevOps process. In fact, all those definitions are source code, and so are subject to change tracking where you can see the history of all changes.

Servers and Orchestration

I'm not saying IT professionals aren't needed anymore. At the end of the day containers do run on actual servers, and those servers have their own OS plus the software to manage container execution. There are also some complexities around networking at the host OS and container levels. And there's the need to support load distribution, geographic distribution, failover, fault tolerance, and all the other things IT pros need to provide in any data center scenario.

With containers the industry is settling on a technology called Kubernetes (K8S) as the primary way to host and manage containers on servers.

Installing and configuring K8S is not trivial. You may choose to do your own K8S deployment in your data center, but increasingly organizations are choosing to rely on managed K8S services. Google, Microsoft, and Amazon all have managed Kubernetes offerings in their public clouds. If you can't use a public cloud, then you might consider using on-premises clouds such as Azure Stack or OpenStack, where you can also gain access to K8S without the need for manual installation and configuration.

Regardless of whether you use a managed public or private K8S cloud solution, or set up your own, the result of having K8S is that you have the tools to manage running container instances across multiple physical servers, and possibly geographic data centers.

Managed public and private clouds provide not only K8S, but also the hardware and managed host operating systems, meaning that your IT professionals can focus purely on managing network traffic, security, and other critical aspects. If you host your own K8S then your IT pro staff also own the management of hardware and the host OS on each server.

In any case, containers and K8S radically reduce the workload for IT pros in terms of managing the myriad VMs needed to host modern microservice-based apps, because those VMs are replaced by container images, managed via source code and the DevOps process.

Containers and Microservices

Microservice architecture is primarily about creating and running individual services that work together to provide rich functionality as an overall system.

A primary attribute (in my view the primary attribute) of services is that they are loosely coupled, sharing no dependencies between services. Each service should be deployed separately as well, allowing for independent versioning of each service without needing to deploy any other services in the system.

Because containers are a self-contained unit of deployment, they are a great match for a service-based architecture. If we consider that each service is a stand-alone, atomic application that must be independently deployed, then it is easy to see how each service belongs in its own container image.

This approach means that each service, along with its dependencies, become a deployable unit that can be orchestrated via K8S.

Services that change rapidly can be deployed frequently. Services that change rarely can be deployed only when necessary. So you can easily envision services that deploy hourly, daily, or weekly, while other services will deploy once and remain stable and unchanged for months or years.

Conclusion

Clearly I am very positive about the potential of containers to benefit software development and deployment. I think this technology provides a nice compromise between virtual machines and PaaS, while providing a vendor-neutral model for hosting apps and services.

Thursday, 30 August 2018 12:47:24 (Central Standard Time, UTC-06:00)  #    Disclaimer
 Thursday, 23 August 2018

Git can be confusing, or at least intimidating - particularly if you end up working on a project that relies on a pull request (PR) model, and even more so if forks are involved.

This is pretty common when working on GitHub open source projects. Rarely is anyone allowed to directly update the master branch of the primary repository (repo). The way changes get into master is by submitting a PR.

In a GitHub scenario any developer is usually interacting with three repos: the primary repo, their personal fork, and the clone on their dev workstation.

Forks are created using the GitHub web interface, and they basically create a virtual "copy" of the primary repo in the developer's GitHub workspace. That fork is then cloned to the developer's workstation.

In many corporate environments everyone works in the same repo, but the only way to update master (or dev or a shared branch) is via a PR.

In a corporate scenario developers often interact with just two repos: the primary repo and the clone on their dev workstation.

The developer clones the primary repo to their workstation.

Whether from a GitHub fork or a corporate repo, cloning looks something like this (at the command line):

$ git clone https://github.com/rockfordlhotka/csla.git

This creates a copy of the repo in the cloud onto the dev workstation. It also creates a connection (called a remote) to the cloud repo. By default this remote is named "origin".

Whether originally from a GitHub fork or a corporate repo, the developer does their work against the clone, what I'm calling the Dev workstation repo in these diagrams.

First though, if you are using the GitHub model where you have the primary repo, a fork, and a clone, then you'll need to add an upstream repo to your dev workstation repo. Something like this:

$ git remote add MarimerLLC https://github.com/MarimerLLC/csla.git

This basically creates a (readonly) connection between your dev workstation repo and the primary repo, in addition to the existing connection to your fork. In my case I've named the upstream (primary) repo "MarimerLLC".
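
After adding it, `git remote -v` will list both connections - a quick sanity check:

```shell
# List configured remotes; after the step above you should see both
# "origin" (the repo you cloned from) and "MarimerLLC" (the primary repo).
git remote -v
```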

This is important, because you are very likely to need to refresh your dev workstation repo from the primary repo from time to time.

Again, developers do their work against the dev workstation repo. They should do their work in a branch other than master. Mostly work should be done in a feature branch, usually based on some work item in VSTS, GitHub, Jira, or whatever you are using for project and issue management.

Back to creating a branch in the dev workstation repo. Personally I name my branches with the issue number, a dash, and a word or two that reminds me what I'm working on in this branch.

$ git fetch MarimerLLC
$ git checkout -b 123-work MarimerLLC/master

This is where things get a little tricky.

First, the git fetch command makes sure my dev workstation repo has the latest changes from the primary repo. You might think I'd want the latest from my fork, but in most cases what I really want is the latest from the primary repo, because that's where changes from other developers might have been merged - and I want their changes!

The git checkout command creates a new branch named "123-work" based on MarimerLLC/master. So based on the real master branch from the primary repo; the one I just made sure was updated from the cloud to be current.

This means my working directory on my computer is now using the 123-work branch, and that branch is identical to master from the primary repo. What a great starting point for any new work.

Now the developer does any work necessary. Editing, adding, removing files, etc.

One note on moving or renaming files: if you want to keep the file's history intact as you move or rename a file it is best to use git to make the changes.

$ git mv OldFile.cs NewFile.cs

At any point while you are doing your work you can commit your changes to the dev workstation repo. This isn't a "backup", because it is on your computer. But it is a snapshot of your work, and you can always roll back to earlier snapshots. So it isn't a bad idea to commit after you've done some work, especially if you are about to take any risks with other changes!

Personally I often use a Windows shell add-in called TortoiseGit to do my local commits, because I like the GUI experience integrated into the Windows Explorer tool. Other people like different GUI tools, and some like the command line.

At the command line a "commit" is really a two part process.

$ git add .
$ git commit -m '#123 My comment here'

The git add command adds any changes you've made into the local git index. Though it says "add", this adds all move/rename/delete/edit/add operations you've done to any files.

The git commit command actually commits the changes you just added, so they become part of the permanent record within your dev workstation repo. Note my use of the -m switch to add a comment (including the issue number) about this commit. I think this is critical! Not only does it help you and your colleagues, but putting the issue number as a tag allows tools like GitHub and VSTS to hyperlink to the issue details.

OK, so now my changes are committed to my dev workstation repo, and I'm ready to push them up into the cloud.

If I'm using GitHub and a fork then I'll push to my personal fork. If I'm directly using a corporate repo I'll push to the corporate repo. Keep in mind though, that I'm pushing my feature branch, not master!

$ git push origin

This will push my current branch (123-work) to origin, which is the cloud-based repo I cloned to create my dev workstation repo.

GitHub with a fork:

Corporate:

The 123-work branch in the cloud is a copy of that branch in my dev workstation repo. There are a couple of immediate benefits to having it in the cloud:

  1. It is backed up to a server
  2. It is (typically) visible to other developers on my team

I'll often push even non-working code into the cloud to enable collaboration with other people. At least in GitHub and VSTS, my team members can view my branch and we can work together to solve problems I might be facing. Very powerful!

(even better, but more advanced than I want to get in this post, they can actually pull my branch down onto their workstation, make changes, and create a PR so I can merge their changes back into my working branch)

At this point my work is both on my workstation and in the cloud. Now I can create a pull request (PR) if I'm ready for my work to be merged into the primary master.

BUT FIRST, I need to make sure my 123-work branch is current with any changes that might have been made to the primary master while I've been working locally. Other developers (or even me) may have submitted a PR to master in the meantime, so master may have changed.

This is where terms like "rebase" come into play. But I'm going to skip the rebase concept for now and show a simple merge approach:

$ git pull MarimerLLC master

The git pull command fetches any changes in the MarimerLLC primary repo, and then merges the master branch into my local working branch (123-work). If the merge can be done automatically it'll just happen. If not, I'll get a list of files that I need to edit to resolve conflicts. The files will contain both my changes and any changes from the cloud, and I'll need to edit them in Visual Studio or some other editor to resolve the conflicts.
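
When a conflict does occur, git marks the conflicting region inside the file, and you pick (or combine) the two sides by hand and remove the markers. A conflicted region looks roughly like this (contents illustrative):

```text
<<<<<<< HEAD
// my change from the 123-work branch
=======
// the incoming change from MarimerLLC/master
>>>>>>> MarimerLLC/master
```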

Once any conflicts are resolved I can move forward. Even if there weren't conflicts I'll need to commit the merged changes from the cloud into my local repo.

$ git add .
$ git commit -m 'Merge upstream changes from MarimerLLC/master'

It is critical at this point that you make sure the code compiles and that your unit tests run locally! If so, proceed. If not, fix any issues, then proceed.

Push your latest changes into the cloud.

$ git push origin

With the latest code in the cloud you can create a PR. A PR is created using the web UI of GitHub, VSTS, or whatever cloud tool you are using. The PR simply requests that the code from your branch be merged into the primary master branch.

In GitHub with a fork the PR sort of looks like this:

In a corporate setting it looks like this:

In many cases submitting a PR will trigger a continuous integration (CI) build. In the case of CSLA I use AppVeyor, and of course VSTS has great build tooling. I can't imagine working on a project where a PR doesn't trigger a CI build and automatic run of unit tests.

The great thing about a CI build at this point is that you can tell that your PR builds and your unit tests pass before merging it into master. This isn't 100% proof of no issues, but it sure helps!

It is really important to understand that there is an ongoing link from the 123-work branch in the cloud to the PR. If I change anything in the 123-work branch in the cloud that changes the PR.

The upside to this is that GitHub and VSTS have really good web UI tools for code reviews and commenting on code in a PR. And the developer can just go change their 123-work branch on the dev workstation to respond to any comments, then

  1. git add
  2. git commit
  3. git push origin

as shown above to get those changes into the cloud-based 123-work branch, thus updating the PR.

Assuming any changes requested to the PR have been made and the CI build and unit tests pass, the PR can be accepted. This is done through the web UI of GitHub or VSTS. The result is that the 123-work branch is merged into master in the primary repo.

At this point the 123-work branch can (and should) be deleted from the cloud and the dev workstation repo. This branch no longer has value because it has been merged into master. Don't worry about losing history or anything, that won't happen. Getting rid of feature branches once merged is necessary to keep the cloud and local repos all tidy.

The web UI can be used to delete a branch in the cloud. To delete the branch from your dev workstation repo you need to move out of that branch, then delete it.

$ git checkout master
$ git branch -D 123-work

Now you are ready to repeat this process from the top based on the next work item in the backlog.
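Putting the whole cycle together, the command sequence looks roughly like this (the work item number and remote names are illustrative, matching the ones used in this post):

```
$ git checkout master
$ git pull upstream master
$ git checkout -b 124-work
  (make and test your changes)
$ git add .
$ git commit -m 'Work on issue #124'
$ git pull upstream master
$ git push origin 124-work
  (create the PR and merge it via the web UI)
$ git checkout master
$ git branch -D 124-work
```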

Git
Thursday, 23 August 2018 16:00:41 (Central Standard Time, UTC-06:00)  #    Disclaimer

Does anyone understand how System.Data.SqlClient assemblies get pulled into projects?

I have a netstandard 2.0 project where I reference System.Data.SqlClient. I then reference/use that assembly in a Xamarin project. This seems to work, but it creates a compile-time warning in the Xamarin project:

The assembly 'System.Data.SqlClient.dll' was loaded from a different 
  path than the provided path

provided path: /Users/user135287/Library/Caches/Xamarin/mtbs/builds/
  UI.iOS/4a61fb5d59d8c2875723f6d1e7f44ce3/bin/iPhoneSimulator/Debug/
  System.Data.SqlClient.dll

actual path: /Library/Frameworks/Xamarin.iOS.framework/Versions/
  11.6.1.4/lib/mono/Xamarin.iOS/Facades/System.Data.SqlClient.dll

I don't think the warning actually causes any issues - but (like a lot of people) I dislike warnings during my builds. Sadly, I don't know how to get rid of this particular warning.

I guess I also don't know if it has anything to do with my Class Library project using System.Data.SqlClient, or maybe this is just a weird thing with Xamarin iOS?

Thursday, 23 August 2018 15:15:31 (Central Standard Time, UTC-06:00)  #    Disclaimer
 Wednesday, 27 June 2018

A while back I blogged about how to edit a collection of items in ASP.NET MVC.

These days I've been starting to use Razor Pages, and I wanted to solve the same problem with the newer technology.

In my case I'm also making sure the new CSLA .NET 4.7.200 CslaModelBinder type works well in this scenario, among others.

To this end I wrote a CslaModelBinderDemo app.

Most of the interesting parts are in the Pages/MyList directory.

Though this sample uses CSLA, the same concepts should apply to any model binder and collection.

My goal is to be able to easily add, edit, and remove items in a collection. I was able to implement the edit and remove operations on a single grid-like page.

I chose to do the add operation on a separate page. I first implemented it on the same page, but in that implementation I ran into complications with business rules that make a default/empty new object invalid. By doing the add operation on its own page there's no issue with business rules.

Domain Model

Before building the presentation layer I created the business domain layer (model) using CSLA. These are just two types: a MyList editable collection, and a MyItem editable child type for the objects in the collection.

The MyItem type is a little interesting, because it implements both root and child data portal behaviors. This is because the type is used as a child when in a MyList collection, but is used as a standalone root object by the page implementing the add operation. In CSLA parlance this is called a "switchable object".
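A rough sketch of what a switchable type looks like follows. This is illustrative only, not the actual sample code; CSLA locates the child methods by naming convention, and the property registration and method bodies here are simplified:

```csharp
using System;
using Csla;

[Serializable]
public class MyItem : BusinessBase<MyItem>
{
  public static readonly PropertyInfo<int> IdProperty =
    RegisterProperty<int>(nameof(Id));
  public int Id
  {
    get => GetProperty(IdProperty);
    set => SetProperty(IdProperty, value);
  }

  // Root behavior: used by the Create page via DataPortal.CreateAsync<MyItem>()
  [RunLocal]
  protected override void DataPortal_Create()
  {
    base.DataPortal_Create();
  }

  // Child behavior: invoked when MyItem is loaded as part of a MyList fetch
  private void Child_Fetch(int id)
  {
    using (BypassPropertyChecks)
      LoadProperty(IdProperty, id);
  }
}
```

The point is simply that both sets of data portal methods coexist on the one type, so the data portal can treat it as a root or a child depending on how it is invoked.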

Configuring the model binder

In the Razor Pages project it is necessary to configure the app to use the correct model binder for CSLA types. The default model binders for MVC and now .NET Core all assume model objects are dumb DTO/entity types - public read/write properties, no business rules, etc. Very much not the sort of model you get when using CSLA.

The new CslaModelBinder for AspNetCore fills the same role as this type has in previous ASP.NET MVC versions, but AspNetCore has a different binding model under the hood, so this is a totally new implementation.

To use this model binder add code in Startup.cs in the ConfigureServices method:

      services.AddMvc(config =>
        config.ModelBinderProviders.Insert(0, new Csla.Web.Mvc.CslaModelBinderProvider(CreateInstanceAsync, CreateChild))
        ).SetCompatibilityVersion(CompatibilityVersion.Version_2_1);

An app can have numerous model binders. The model binder providers indicate which types a binder should handle. So the CslaModelBinderProvider ensures that the CslaModelBinder is used for any editable business object types (basically BusinessBase or BusinessListBase subclasses).

Notice that two parameters are provided to CslaModelBinderProvider: something to create root objects, and something to create child objects.

These are optional. If you don't provide them, CslaModelBinder will directly create instances of the appropriate types. But if you want to have some control over how the instances are created then you need to provide these parameters (and related implementations).

Root and Child instance creators

In my case I want to make sure that when my root collection is instantiated, it contains all existing data.

Remember that the model binder is invoked on page postback, when the data is flowing from the browser back into the Razor Page on the server. All the collection data is in the postback, but it also exists in the database.

Basically what we're doing in this scenario is merging the changed data from the browser into the data from the database. I could maintain the collection in some sort of Session store, but in this app I'm choosing to load it from the database each time:

    private async Task<object> CreateInstanceAsync(Type type)
    {
      object result;
      if (type.Equals(typeof(Pages.MyList.MyList)))
        result = await Csla.DataPortal.FetchAsync<Pages.MyList.MyList>();
      else
        result = Csla.Reflection.MethodCaller.CreateInstance(type);
      return result;
    }

Of course the collection contains child objects, and the postback provides an array of data, with each row in the array corresponding to an object that exists in the collection.

On postback, step 1 is that the root collection gets created (via the FetchAsync call), and then each row in the postback array needs to be mapped into an existing (or new) child object in the collection.

The CreateChild method grabs the Id value for the current row from the postback and uses that value to find the existing child object in the collection. If that child exists it is returned to CslaModelBinder for binding. If it isn't in the collection then a new instance of the type is created so that child can be bound and added to the collection.

    private object CreateChild(System.Collections.IList parent, Type type, Dictionary<string, string> values)
    {
      object result = null;
      if (type.Equals(typeof(Pages.MyList.MyItem)))
      {
        var list = (Pages.MyList.MyList)parent;
        var idText = values["Id"];
        int id = string.IsNullOrWhiteSpace(idText) ? -1 : int.Parse(idText);
        result = list.Where(r => r.Id == id).FirstOrDefault();
        if (result == null)
          result = Csla.Reflection.MethodCaller.CreateInstance(type);
      }
      else
      {
        result = Csla.Reflection.MethodCaller.CreateInstance(type);
      }
      return result;
    }

The result is that CslaModelBinder "creates" a new collection, but really it gets a pre-loaded instance with current data. Then it "creates" a new child object for each row of data in the postback, but really it gets pre-existing instances of each child object with existing data, and then the postback data is used to set each property on the object.

The beauty here is that if the postback value is the same as the value already in the child object's property, then CSLA will ignore the "new" value. But if the values are different then the child object's IsDirty property will be true so it will be saved to the database.

Adding a new child to the collection

It is certainly possible to add a new child object to the collection like I did in the previous ASP.NET MVC blog post. The drawback to that approach is that this new child may have business rules that complicate matters if it is created "blank" and added to the list.

So in this case I decided a better overall experience might be to have the user add an item via a create page, and do edit/remove operations on the index page.

The Create.cshtml page is perhaps the simplest scenario. The Razor was created by scaffolding. Nothing in this view is unique to this problem space or CSLA. It is just a standard create page.

The Create.cshtml.cs code behind the page is a little different from code you might find for Entity Framework, because I'm using CSLA domain objects. This just means that the OnGet method uses the data portal to retrieve the domain object.

    public async Task<IActionResult> OnGet()
    {
      MyItem = await Csla.DataPortal.CreateAsync<MyItem>();
      return Page();
    }

And the OnPostAsync method calls the SaveAsync method to save the domain object.

    public async Task<IActionResult> OnPostAsync()
    {
      if (!ModelState.IsValid)
      {
        return Page();
      }

      MyItem = await MyItem.SaveAsync();

      return RedirectToPage("./Index");
    }

Finally, the MyItem property is a standard data bound Razor Pages property.

    [BindProperty]
    public MyItem MyItem { get; set; }

The important thing to understand is that MyItem is a subclass of BusinessBase and so the CslaModelBinderProvider will direct data binding to use CslaModelBinder to do the binding for this object. Because CslaModelBinder understands how to correctly bind to CSLA types, everything works as expected.

Editing and removing items in the collection

Now we get to the fun part: creating a page that displays the collection's contents and allows the user to edit multiple items, mark items for deletion, and then click a button to commit the changes.

Interestingly enough, the Index.cshtml.cs code isn't complex. This is because most of the work is handled by CslaModelBinder and the two methods we already implemented in Startup.cs. This code just gets the domain object in OnGetAsync and saves it in OnPostAsync.

    [BindProperty]
    public MyList DataList { get; set; }

    public async Task OnGetAsync()
    {
      DataList = await Csla.DataPortal.FetchAsync<MyList>();
    }

    public async Task<IActionResult> OnPostAsync()
    {
      foreach (var item in DataList.Where(r => r.Remove).ToList())
        DataList.Remove(item);
      DataList = await DataList.SaveAsync();
      return RedirectToPage("Index");
    }

Notice how the Remove property is used to identify the child objects that are to be removed from the collection. Because this is a CSLA collection, this code just needs to remove these items, and when SaveAsync is called to persist the domain object's data those items will be deleted, and any changed data will be updated or inserted as necessary.

The Index.cshtml page is a bit different from a standard page, in that it needs to display the input fields to the user, and make sure everything is properly connected to each item in the collection such that a postback can form all the data into an array.

The key part is the for loop that creates those UI elements in a table.

  @for (int i = 0; i < Model.DataList.Count; i++)
  {
    <tr>
      <td>
        <input type="hidden" asp-for="DataList[i].Id" />
        <input asp-for="DataList[i].Name" class="form-control" />
        <span asp-validation-for="DataList[i].Name" class="text-danger"></span>
      </td>
      <td>
        <input asp-for="DataList[i].City" class="form-control" />
        <span asp-validation-for="DataList[i].City" class="text-danger"></span>
      </td>
      <td>
        <input asp-for="DataList[i].Remove" type="checkbox" />
        <label class="control-label">Select</label>
      </td>
    </tr>
  }

Instead of a foreach loop, this uses an index to go through each item in the collection, allowing the use of asp-for to create each UI control.

Make special note of the hidden element containing the Id property. Although this isn't displayed to the user, the value needs to round-trip so it is available to the server as part of the postback, or the CreateChild method implemented earlier wouldn't be able to reconcile existing child object instances with the data in the postback array.

Summary

Quick and easy editing of a collection is a very common experience users expect from apps. The standard CRUD scaffolding implements all the right behaviors, but it is tedious for a user to edit several rows of data when each row requires navigating to a separate page. The approach in this post doesn't solve every UX need, but when quick editing of multiple rows is required, this is a good answer.

Thanks to Razor Pages data binding, implementing this approach is not difficult.

Wednesday, 27 June 2018 16:27:06 (Central Standard Time, UTC-06:00)  #    Disclaimer
 Monday, 18 June 2018

In my microservices presentations at conferences I talk about APIs like this. The talks go into more depth on the background, but these are the high-level points of that section of the talk.

From 1996, with the advent of MTS, Jaguar, and EJB, a lot of people have created public service APIs with endpoints like this pseudo-code:

int MyService(int x, double y)

That is not a service, that is RPC (remote procedure call) modeling. It is horrible. But people understand it, and the technologies have supported it forever (going back decades, and rolling forward to today). So LOTS of people create "services" that expose that sort of endpoint. Horrible!!

A better endpoint would be this:

Response MyService(Request r)

At least in this case the Request and Response concepts are abstract, and can be thought of as message definitions rather than types. Hardly anybody actually thinks of them that way, but they should.

With this approach you can at least apply the VB6 COM rules for evolving an API (which is to say you can add new stuff, but can't change or remove any existing stuff) without breaking clients.
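To make the rule concrete, here is a hypothetical message contract evolving under those rules (the member names are invented for illustration):

```csharp
// Hypothetical message contract. Version 2 only *adds* a member, so
// clients built against version 1 keep working unchanged.
public class Request
{
  public string CustomerId { get; set; }
  public decimal Amount { get; set; }

  // Added in v2 -- safe: old clients never set it, so give it a
  // sensible default
  public string Currency { get; set; } = "USD";

  // NOT allowed under these rules: renaming CustomerId, changing the
  // type of Amount, or removing any existing member -- each of those
  // breaks clients that are already deployed
}
```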

However, that is still a two-way synchronous API definition, so achieving things like fault tolerance, scaling, and load balancing is overly complex and WAY overly expensive.

So the correct API endpoint is this:

void MyService(Request r)

In this case the service is a one-way call that can easily be made async (queued). That reinforces the idea that Request is a message definition. It also makes it extremely easy and cost-effective to get fault tolerance, scaling, and load balancing, because the software architecture directly enables those concepts.
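As a sketch of that shape (with an in-memory queue standing in for whatever real queuing technology you'd actually use, and invented type members):

```csharp
using System.Collections.Concurrent;

public class Request
{
  public string Payload { get; set; }
}

public class MyService
{
  private readonly ConcurrentQueue<Request> _queue =
    new ConcurrentQueue<Request>();

  // One-way: the caller gets no return value, so the call can simply be
  // queued and processed later by however many workers you choose to run
  public void Accept(Request r) => _queue.Enqueue(r);
}
```

Because the caller never waits on the work, you can add or remove workers draining the queue without the caller knowing or caring.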

Monday, 18 June 2018 09:54:09 (Central Standard Time, UTC-06:00)  #    Disclaimer

Powered by: newtelligence dasBlog 2.0.7226.0

Disclaimer
The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.

© Copyright 2018, Marimer LLC
