Tag Archives: ASP.NET

Building a Dynamic Platform with ASP.NET, MVC and Azure

Software development tooling and techniques are moving at the fastest pace ever.  It seems you can’t get through a week without a new tool, service or way of building software!

The biggest issue I have with learning about these new technologies is one of context – it’s very difficult to learn something unless you have an idea of a problem that the tool/service/methodology addresses.

With this in mind I often find it useful to have an idea or product I want to build.

So, over a series of blog posts I’m going to start building out a new platform, and as I go along I will use a variety of tools and technologies – and in fact I will often refactor (i.e. rebuild) as we go.

This last point is quite an important one.  In a world that moves so fast, a common approach to getting products to market in the shortest possible timeframe is to build an MVP – not to be confused with Microsoft’s Most Valuable Professional award; MVP in this instance refers to a Minimum Viable Product.


An MVP is the most basic working, production version of a product that can be built and deployed as fast as possible.  The idea is build fast, fail fast.  If a product looks like it is being picked up, you can spend time and money building it out with new features, or even rebuilding as required to create a better service. On the flip side, if the product bombs you’ve not wasted too much time on it! Generally this is done over a series of iterations – you build the core, most important bits first and release them, then add to them over time as requirements demand.

So that’s how I will build my app – I will build services using the most common technologies for speed of development, and then rebuild them using more up to date technologies. We’ll also build out as well as refactoring by adding new functionality and services as we go.

The app itself will essentially be a development platform – think Force.com, MS Dynamics or ServiceNow. I want to be able to create a platform that itself will allow me to build apps within it using no code.

We’ll be building it as a set of services, and plug them up as we go along, thus creating additional functionality.  We’ll start simple and small, and slowly get more complex as we go.

The first set of components we build will be;

  • Data Modeler – be able to define data objects through the system itself, rather than defining your own table schemas.
  • Form Designer – be able to build forms using the models we’ve defined.
  • Workflows – be able to build workflows using a GUI. These will start as a simple ‘send an email’, and gradually we will build up more complex options.
  • Security – secure the platform using OAuth and ensure this flows through the system.

These are fundamental to the system, and after we have them, we’ll start to look at what other options we can add.  I won’t go into these now because a) I haven’t actually decided! And b) I’d like to give you, my audience, the chance to make suggestions!

These will be built in layers as follows;


We will have a main UI – this will be the end-user portal – but then we’ll have separate UIs for each individual service.  The aim here is that each service should be able to run independently; however, the platform GUI will seamlessly bring them together and make them look like a single UI.

At the bottom we have a series of databases, and then we wrap these databases in individual services that expose and manage that data.  The UI should be completely agnostic of the underlying database – so if we changed the technology, we should only need to update the service.

Any comms between services will be performed by a message broker – i.e. there should be no direct comms either to other services or to the databases themselves; in this way we decouple (i.e. remove dependencies) as much as possible.
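To make that a little more concrete, a broker-based setup means services exchange simple message contracts rather than calling each other. Here’s a minimal sketch of what such a contract might look like – the class and property names here are entirely my own assumption, nothing the platform has defined yet:

```csharp
using System;

// Hypothetical message contract for broker-based comms between services.
// Each service publishes these to the broker; interested services subscribe.
public class EntityChangedMessage
{
    public string SourceService { get; set; }  // e.g. "DataModeler"
    public string EntityType { get; set; }     // e.g. "Schema"
    public int EntityId { get; set; }
    public string ChangeType { get; set; }     // "Created", "Updated" or "Deleted"
    public DateTime OccurredAtUtc { get; set; }
}
```

Because the message carries everything the consumer needs, neither service holds a reference to the other – swap the broker (Azure Service Bus, RabbitMQ, etc.) and nothing else changes.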

So that’s the overview – let’s move on to the first task – building the modeller.

The Data API

Our platform needs to support a dynamic data model.  By this I mean that our models should be defined by an advanced user within our system itself.  We therefore need to build a service that allows us to define a data model as a template.

From a traditional tables-and-fields perspective we’ll have a Schema as a holder for a set of attributes or elements; it will have one or more SchemaElements, which define the type of data we are storing.

We will then have a Model object that stores user data, with one or more Model Elements that store the actual field level data.  Our model data will hold intrinsic links to the underlying schema as one without the other is useless.  Thus, in a traditional development sense our data structure looks like this;


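As a rough sketch of that structure in code – the class and property names below are my assumption, simply mirroring the Schema/SchemaElements and Model/ModelElements tables described above:

```csharp
using System.Collections.Generic;

// Hypothetical POCOs for the dynamic model: a Schema is a template,
// a Model is one stored "record" that follows that template.
public class Schema
{
    public int Id { get; set; }
    public string Name { get; set; }                 // e.g. "Customer"
    public List<SchemaElement> Elements { get; set; } = new List<SchemaElement>();
}

public class SchemaElement
{
    public int Id { get; set; }
    public int SchemaId { get; set; }     // link back to the owning Schema
    public string Name { get; set; }      // e.g. "CustomerName"
    public string DataType { get; set; }  // e.g. "String"
}

public class Model
{
    public int Id { get; set; }
    public int SchemaId { get; set; }     // which template this data follows
    public List<ModelElement> Elements { get; set; } = new List<ModelElement>();
}

public class ModelElement
{
    public int Id { get; set; }
    public int ModelId { get; set; }
    public int SchemaElementId { get; set; } // which field this value belongs to
    public string Value { get; set; }        // the actual field-level data
}
```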
Lost?  OK let’s run through this concept and compare what we want to do to a normal development project using SQL Server.

Imagine we need to store a customer record.  To do this we’d create a table called Customers.  That table has columns for Id, CustomerName and City (OK, this is a REALLY simple database).

It might look something like this;

Customers Table

Field         Data Type
Id            INT
CustomerName  NVARCHAR(MAX)
City          NVARCHAR(MAX)

In our platform we won’t build a SQL table to store customers, we will first define the Customer Model in the Schema/SchemeElements tables, and any data for that customer will be stored in a Model/ModelElements table.

Thus, we first create the Customer Schema;

Schema Table

Field  Data Type  Value
Id     INT        1
Name   NVARCHAR   “Customer”

SchemaElements Table

Field     Data Type  Value
SchemaId  INT        1
Name      NVARCHAR   “CustomerName”
DataType  NVARCHAR   “String”
SchemaId  INT        1
Name      NVARCHAR   “City”
DataType  NVARCHAR   “String”

So here in our Schema table we define our Customer model with an internal SchemaId and a name, which is Customer.  We then create SchemaElement records that define the two properties we want to store for the customer – CustomerName and City.  For each, we link back to the Schema table using the SchemaId, and set the DataType to string – note we are using C# context here, not SQL syntax (i.e. we say string, not NVARCHAR).

When users of our system then start to store data, they will store it in the Model and ModelElements tables.  And because these tables link back to the Schema and SchemaElements tables, we can make sense of what that data is.
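To illustrate that last point, here’s a minimal in-memory sketch of how a ModelElement’s value only gets its meaning by joining back through SchemaElementId to the element’s name (the method name and dictionary shapes here are mine, purely for illustration):

```csharp
using System.Collections.Generic;

public static class ModelReader
{
    // schemaElements: SchemaElement Id -> field name   (e.g. 1 -> "CustomerName")
    // modelValues:    SchemaElementId -> stored value  (e.g. 1 -> "Acme Ltd")
    // Returns field-name -> value for one model instance.
    public static Dictionary<string, string> Resolve(
        Dictionary<int, string> schemaElements,
        Dictionary<int, string> modelValues)
    {
        var record = new Dictionary<string, string>();
        foreach (var pair in modelValues)
        {
            string fieldName;
            // Without the schema lookup the raw value is meaningless
            if (schemaElements.TryGetValue(pair.Key, out fieldName))
                record[fieldName] = pair.Value;
        }
        return record;
    }
}
```

In the real platform this join happens in SQL (as shown next), but the principle is the same: data plus template equals a usable record.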

So as a final visualisation – let’s look at how our actual SQL tables, with some actual data in them, might look.


And then if we want to query across these tables to bring back a list of customers and their element values, we could write a query like this;

SELECT [Schema].Name AS SchemaName,
       SchemaElements.Name AS Element,
       SchemaElements.DataType,
       ModelElements.Value
FROM ModelElements
INNER JOIN Models ON ModelElements.ModelId = Models.Id
INNER JOIN [Schema] ON Models.SchemaId = [Schema].Id
INNER JOIN SchemaElements ON ModelElements.SchemaElementId = SchemaElements.Id
                         AND SchemaElements.SchemaId = [Schema].Id

Which would return the following;


Therefore, the actual data we are recording consists of the data itself and a linked template.  In our app we’ll need a couple of screens – the first is an admin page that allows us to define the model, the second is a dynamic form that wraps itself around a model.

But we’re getting ahead of ourselves.  The first thing we will build is an API that will allow us to build a model or template.

I’ll start by creating a new project in GitHub – I’m actually going to be doing this twice – once for my blog and again for my YouTube channel.

If you’d like to see this series on YouTube check it out here https://www.youtube.com/c/completecoder

And the GitHub link for the code for this BLOG is here


In the next post we’ll start the actual coding! Don’t forget to sign up to be notified of updates.

Entity Framework & Direct SQL

Entity Framework is a great time saver.

When I think back to the days when I had to manually construct SQL statements, parse the results, control updates, and of course ensure my actual database design was in sync with what my code expected – well it sends a shiver down my spine!

I remember back in the day I even built myself a little utility that would take my database and spit out boiler plate code for basic CRUD operations.

But then EF was released – and although it was a little bit flakey at first (with people tutting and muttering about NHibernate), it was still a lovely little thing.  These days of course I use it all the time.  In fact as you may have seen in one of my earlier posts I’ve even copied more than a little for an Azure Table Storage version.

And of course couple EF with decent patterns such as the repository pattern and Dependency Injection and you have a rock solid foundation.

But (there’s always a but) – EF is sometimes a bit slow compared with issuing SQL commands directly – especially with batch operations.

For this reason the EF context exposes a Database property, which in turn exposes a number of options for issuing SQL commands directly.

I will show you how to use the two most common ones – SqlQuery and ExecuteSqlCommand.

As I like to decouple as much as possible, and because I like my interfaces to only depend on core libraries I’m going to hide away some of the EF specific stuff.

So, first of all I like to have an IRepository<TEntity> interface and a Repository<TEntity> base class, that way each repository gets the basic CRUD methods and anything else I might want;

public interface IRepository<TEntity> where TEntity : class
{
    void Delete(object id);
    void Delete(TEntity entity);
    System.Linq.IQueryable<TEntity> GetAll();
    System.Linq.IQueryable<TEntity> GetAll(object filter);
    IPagedResponseViewModel GetPaged(int take, int skip, string orderBy, bool orderByAscending, object filter);
    TEntity GetById(object id);
    TEntity GetFullObject(object id);
    void Insert(TEntity entity);
    void Update(TEntity entity);
    void Commit();
    void Dispose();
    // Direct SQL Stuff
    int ExecuteSqlCommand(string sql, object[] parameters);
    ICollection<TEntity> SqlQuery(string sql, object[] parameters);
    int ExecuteSqlCommand(string sql);
    ICollection<TEntity> SqlQuery(string sql);
}

public abstract class RepositoryBase<TEntity> : IRepository<TEntity> where TEntity : class
{
    internal DataContext context;
    internal DbSet<TEntity> dbSet;

    public RepositoryBase(DataContext context)
    {
        this.context = context;
        this.dbSet = context.Set<TEntity>();
    }

    public virtual int ExecuteSqlCommand(string sql)
    {
        return context.Database.ExecuteSqlCommand(sql);
    }

    public virtual int ExecuteSqlCommand(string sql, object[] parameters)
    {
        return context.Database.ExecuteSqlCommand(sql, parameters);
    }

    public virtual ICollection<TEntity> SqlQuery(string sql)
    {
        return context.Database.SqlQuery<TEntity>(sql).ToList();
    }

    public virtual ICollection<TEntity> SqlQuery(string sql, object[] parameters)
    {
        return context.Database.SqlQuery<TEntity>(sql, parameters).ToList();
    }

    public virtual void Commit()
    {
        context.SaveChanges();
    }

    public virtual void Dispose()
    {
        context.Dispose();
    }

    // Remaining CRUD members (Delete, GetAll, GetById, Insert, Update, etc.)
    // omitted here for brevity.
}

As you can see all we are really doing is encapsulating the SqlQuery and ExecuteSqlCommand methods by taking in a string or a string and a list of parameters.

First let’s look at ExecuteSqlCommand – this is very straightforward: we simply pass in our SQL string, such as

ExecuteSqlCommand("UPDATE SomeTable SET SomeColumn='Some Value'");

EF issues the command and returns an int indicating the number of affected rows.
If you want, you can pass in parameters like this:

ExecuteSqlCommand("UPDATE SomeTable SET SomeColumn='Some Value' WHERE ID=@p0", new object[]{3});

Now for SqlQuery. You may notice SqlQuery uses TEntity. (If you’re not familiar with generics: we use TEntity to refer to whatever class we pass in when instantiating our repository – so anything that refers to TEntity refers to whatever object we want to use.) For example, we would have:

public class MyObject
{
    public int Id { get; set; }
    public string SomeProperty { get; set; }
}

public class MyRepository : RepositoryBase<MyObject>
{
    public MyRepository(DataContext context) : base(context) { }
}

So when we then instantiate the actual repository:

var myRepo = new MyRepository(context);

We get all the commands from the RepositoryBase class, typed against our MyObject model.

Anyway, ignoring our repository for now, if we were to query Database.SqlQuery directly we’d use

context.Database.SqlQuery<MyObject>("SELECT * FROM MyObjects");

This will result in a list of MyObject – EF attempts to map the results it receives back onto the model you pass in.
All we have done is automate that, because we already know the model from when we instantiated our repository – thus, when calling our encapsulated method, we just use

SqlQuery("SELECT * FROM MyObjects");

Again, we can pass in parameters, just like with ExecuteSqlCommand.

So as you can see, this gives you complete flexibility when using EF – as mentioned earlier, if I have to iterate through and update large datasets, I sometimes construct my SQL directly and use these methods instead.
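As a quick illustration of the batch case – the helper below (the Products table and method name are made up for this example) builds a single parameterized UPDATE covering a whole batch of ids, which you could then hand to ExecuteSqlCommand instead of loading and saving each entity through EF:

```csharp
using System;

public static class BatchSql
{
    // Builds one parameterized UPDATE for a batch of ids,
    // e.g. count = 2 produces:
    // "UPDATE Products SET Discontinued = 1 WHERE Id IN (@p0, @p1)"
    public static string BuildDiscontinueSql(int count)
    {
        var placeholders = new string[count];
        for (int i = 0; i < count; i++)
            placeholders[i] = "@p" + i;  // @p0, @p1, ... match EF's positional parameters

        return "UPDATE Products SET Discontinued = 1 WHERE Id IN ("
               + string.Join(", ", placeholders) + ")";
    }
}
```

Usage would be something like `repo.ExecuteSqlCommand(BatchSql.BuildDiscontinueSql(ids.Length), ids)`, where ids is an object[] – one round trip to the database, rather than one per entity.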

Note: This post was created in response to a user question on my course about ASP.NET Development Techniques.  My blog viewers can get the course for just $10 by clicking this link