I am often asked this question, or at least some variation of it.
First let me say that there is a difference between learning to code, learning a programming language, and learning to program.
Learning to Code
Learning to code means learning the syntax of a language such as C# or Java. It’s about knowing the basics of code constructs, i.e. how to declare and use variables for the temporary storage of data in memory. It’s about the basics of how to create loops, decision trees, iterative functions and so on.
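To make that concrete, here is a minimal sketch in Java (the method and variable names are my own, purely illustrative) showing a variable, a loop and a decision working together:

```java
public class Basics {
    // Sum only the even numbers from 1 up to a limit, using a variable,
    // a loop, and a decision (an if statement).
    public static int sumOfEvens(int upTo) {
        int total = 0;                      // variable: temporary storage in memory
        for (int i = 1; i <= upTo; i++) {   // loop
            if (i % 2 == 0) {               // decision
                total += i;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumOfEvens(10)); // prints 30 (2 + 4 + 6 + 8 + 10)
    }
}
```

A handful of constructs like these, combined in different ways, make up the bulk of everyday code in almost any language.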
Learning to code is what most people think they mean when they ask ‘How do I learn to program?’. Learning to code is actually quite straightforward: there are a few basic concepts and some rules to follow. If it were too complicated, computers wouldn’t understand what you’re trying to tell them to do!
Learning a Language
There are many languages to choose from, but when it comes down to it they all take in input, store temporary data in variables, do something with that data using a combination of decision trees, loops and calculations, then return a result.
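That shape can be sketched in a few lines of Java (the method name and the task are my own invention, chosen purely to illustrate the pattern):

```java
public class Pipeline {
    // The same shape in every language: input comes in, work happens
    // in the middle, a result comes out.
    public static int countVowels(String input) {            // take in input
        int count = 0;                                       // temporary data in a variable
        for (char c : input.toLowerCase().toCharArray()) {   // loop over the data
            if ("aeiou".indexOf(c) >= 0) {                   // decision
                count++;                                     // calculation
            }
        }
        return count;                                        // return a result
    }

    public static void main(String[] args) {
        System.out.println(countVowels("Hello World")); // prints 3
    }
}
```

Swap the keywords and the punctuation, and the equivalent Python, C# or JavaScript would follow exactly the same pattern.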
As languages are frequently updated, the list of commands and functions available to you, and sometimes the way you use them, is constantly changing.
Learning to Program
Finally, we have programming. Many people think this is simply a combination of the two above. For me, however, programming is about the structure and nature of how you address a particular problem using a programming language.
This last aspect is quite possibly the most fluid subject. As software development has become more accessible (by which I mean you are far more likely to come across someone who can program today than you would have been 20 years ago), the range of available patterns, methodologies and technologies has exploded.
Over the past 50 years we’ve gone from relatively simple patterns, whereby code would literally just be one line after the next, to complex systems involving procedural, functional and object-oriented paradigms.
These in turn have led to a range of available patterns and frameworks, with many languages spawning their own. It’s important to understand WHY the landscape has grown so much.
50 years ago, computers were not as capable as they are today. It is said that your mobile phone is considerably more powerful than the computers that took the astronauts to the moon! If we tried to write software using the same patterns as we did back then, the codebase would quickly become unusable. In fact, many high-profile software project failures stem from how they are managed and the way they have been written.
Monolithic is a term often used to describe these older, bug-prone patterns. Today developers use a range of patterns and methodologies such as Object-Oriented Programming, Model-View-Controller, Model-View-ViewModel, Model-View-Presenter, Dependency Injection, Dependency Inversion, Single Responsibility… the list goes on and on!
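To give a flavour of just one of those names, here is a minimal Java sketch of Dependency Injection (the interface and class names are hypothetical, invented purely for illustration): the service is handed its dependency through its constructor rather than creating it itself.

```java
public class DiExample {
    // The dependency is described by an interface...
    interface MessageSender {
        void send(String message);
    }

    // ...with one concrete implementation...
    static class ConsoleSender implements MessageSender {
        public void send(String message) {
            System.out.println("Sending: " + message);
        }
    }

    // ...and the service is *given* its dependency (constructor injection)
    // rather than constructing a ConsoleSender itself.
    static class OrderService {
        private final MessageSender sender;

        OrderService(MessageSender sender) {
            this.sender = sender;
        }

        void placeOrder(String item) {
            sender.send("Order placed: " + item);
        }
    }

    public static void main(String[] args) {
        OrderService service = new OrderService(new ConsoleSender());
        service.placeOrder("book"); // prints "Sending: Order placed: book"
    }
}
```

Because the dependency arrives from outside, a test can hand the service a fake MessageSender and inspect what was sent, which is exactly the kind of ideal (easier testing, looser coupling) these patterns exist to deliver.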
However, all these patterns are about achieving a set of ideals: better quality, fewer bugs, easier collaboration, faster coding and shorter release cycles. All this in turn results in easier working environments, better estimates, better software and, ultimately, happier customers.
For me THIS is what programming means, and THIS is the bit that takes time, effort and in truth should be a constant element of learning for any developer throughout their career. The first two bits – coding and language – are actually quite simple and easy. It’s knowing how to program that makes the biggest difference – and unfortunately, it’s developers with this skill that are in short supply (and high demand).
If you’d like to learn to program properly, please check out my course ASP.NET MVC Complete Training.