Your Applications in a Cloud World

The existing method of controlling user accounts and workstations in most major (and smaller) companies is usually based on the proven technology of Active Directory. The advantage of domain-joined workstations is that it is easy for IT to impose limits and enable features that make it easy for users to start working. Mapped drives, favorites, color schemes, shortcuts and desktop items can all be managed centrally. A default “image” ensures that all workstations are deployed consistently, and little effort is required when changes occur. In the past years many additional applications for managing workstations have been released, from the System Center suite to ManageEngine and Specops. The downside of domain-joined devices is the need for direct network access, either on the local network or via VPN. It also opens up another attack vector, certainly in the mobile world we live in today. And finally (and most importantly): when Gen Z joins our company, do they expect a fully locked-down machine provided by the company, or would they rather choose their own device to work on (as they always did when they were still in school)?

Corporate Laptop Deployments…

With more applications moving to the cloud, however, a new architecture for managing identities and workstations becomes available. An architecture that still uses the power of Active Directory in the background, but removes the need for your workstations to be domain joined.

Microsoft introduced this new way of working with Windows 10, but it also applies to older Windows versions such as Windows 8. In theory, the vision looks like this: a user purchases a new device and wants to use it to access corporate applications. The user clicks the Workplace Join button and provides a username and password to enroll the device. Policies are loaded automatically and a Company Store with available applications is displayed. The user chooses which applications to install and starts working.

To realize this vision, however, some changes need to be made to the identity services and to the way most companies publish their internal applications.

Traditional way of publishing applications for internal / external and corporate / BYOD devices..

The image above shows the logical architecture of applications today in a large enterprise. On the left are the users, who live in Active Directory. These users work on corporate devices from multiple locations. Usually only domain-joined devices (or, as we call them, ‘controlled’ devices) access applications from the corporate LAN. Other devices are BYOD and access applications from any location. When publishing applications, administrators often have to take into account whether devices are domain joined or BYOD, and protect external access with many appliances and security products, while internal users can connect directly over the LAN. Additional authentication mechanisms are based on MFA or VPN access.

This is because administrators and architects use the default external-DMZ-internal network layout that has been imprinted into our mindsets for years. Firewalls and WAF devices are added to the datacenter to protect external access to the applications. Federation services are added to provide authentication to cloud-based applications and, all in all, a lot of supporting datacenter infrastructure is needed: public IP addresses, WAN connections and much more.

As the high-level picture shows, a lot of components are required when publishing applications, but the main (very old) underlying idea is that internal users are trusted and external users are untrusted.

The idea of cloud is that your applications are always available via the internet, on any device, from any location. This is completely different from the traditional way of looking at applications and connectivity. The traditional way is to put a firm front-end firewall in place and block or inspect all inbound traffic into the “corporate network”, as well as control what leaves the company using proxy services in a DMZ network. This boxed model no longer holds up in the cloud world we live in today, as we can see when looking at the cloud services corporations already use (Salesforce, Office 365, etc.). Users log in to the network (or want to use corporate services) from any location around the world, making their work day more flexible and allowing them to work whenever and wherever they want.

Some companies provide anywhere access through the use of VPN technologies. However, the philosophy is still completely different. VPN technology allows the end user to virtually connect to the corporate network and then be fully part of that network (fully trusted) while being remote. Cloud connectivity allows users to use any device (without VPN) that only requires an internet connection to use services.

This new way of connecting will change the corporate network infrastructure in the long term. While the corporate network will remain for the next few years, services will gradually be removed from it and moved to the cloud, where they are readily available.

The above, however, implies that these new cloud services are isolated from the corporate network, which, certainly during the journey to full cloud adoption, is not the case. Cloud services will (mostly) require access to on-premises services to pull or push data. Since that data today mostly resides in the corporate network, network connectivity must be established. Even when applications do not move to the “cloud” at all, they can still be made available via this new “cloud” mindset and architecture. Web applications that do not use federated logins but rely on Kerberos or HTTP header-based authentication can be made available to corporate users on the internet in this new cloud architecture with a few tricks (for example Kerberos Constrained Delegation performed by the proxy connector).
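To make the header-based case a bit more concrete, here is a minimal sketch (Python standard library only) of a backend that trusts an identity header injected by a pre-authenticating reverse proxy. The header name X-Authenticated-User and the trusted proxy address are assumptions for illustration, not any product's actual contract; in a real deployment the proxy must strip any client-supplied copy of that header.

```python
# Minimal sketch: a backend web app that relies on HTTP header-based
# authentication, trusting an identity header set by a pre-authenticating
# reverse proxy in front of it. Header name and proxy address are assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer

TRUSTED_PROXY_IPS = {"10.0.0.10"}          # assumption: address of the proxy/connector
IDENTITY_HEADER = "X-Authenticated-User"   # assumption: header injected by the proxy

class HeaderAuthApp(BaseHTTPRequestHandler):
    def do_GET(self):
        # Only trust the identity header when the request comes from the proxy,
        # which is expected to strip any client-supplied copy of it.
        if self.client_address[0] not in TRUSTED_PROXY_IPS:
            self.send_error(403, "Direct access is not allowed")
            return
        user = self.headers.get(IDENTITY_HEADER)
        if not user:
            self.send_error(401, "Missing identity header")
            return
        body = f"Hello {user}, you were pre-authenticated upstream.\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HeaderAuthApp).serve_forever()
```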

In short, every application should be designed and implemented as a cloud-based application, wherever it is hosted. Designing all applications for this new architecture allows companies to fully utilize the flexibility of authentication providers and data providers, and to be fully flexible in where each application is hosted at any given time.

In an architecture where users no longer rely on Active Directory and where local applications are not published to the internet via local firewalls and Web Application Firewalls, there is no need to publish these applications via a public IP address on the datacenter either. Closing all inbound connections increases the security of the datacenter, as any inbound connection flows through controlled agents that themselves only make outbound connections to the cloud. Furthermore, every connection is ideally pre-authenticated by the cloud provider. With Azure Active Directory, Azure AD Application Proxy implements this architecture: users need to authenticate to Azure AD before any connection is established to the application in the backend datacenter.
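To illustrate what pre-authentication means for an application, the sketch below validates an Azure AD-issued token before serving a request. The tenant ID, audience and PyJWT usage are assumptions for illustration; when an app is published behind Azure AD Application Proxy, the proxy enforces the Azure AD sign-in before traffic ever reaches the backend, so a check like this is defence-in-depth rather than a requirement.

```python
# Rough sketch: validate an Azure AD-issued JWT before serving a request,
# as a defence-in-depth check behind a pre-authenticating proxy.
# Tenant ID and audience below are placeholders (assumptions), not real values.
import jwt  # PyJWT (pip install pyjwt[crypto])

TENANT_ID = "00000000-0000-0000-0000-000000000000"   # placeholder tenant
AUDIENCE = "api://my-internal-app"                    # placeholder app ID URI
JWKS_URL = f"https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys"
ISSUER = f"https://login.microsoftonline.com/{TENANT_ID}/v2.0"

jwks_client = jwt.PyJWKClient(JWKS_URL)

def validate_token(bearer_token: str) -> dict:
    """Return the token claims if signature, issuer and audience check out."""
    signing_key = jwks_client.get_signing_key_from_jwt(bearer_token)
    return jwt.decode(
        bearer_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )

# Example usage inside a request handler (hypothetical request object):
#   claims = validate_token(request_headers["Authorization"].removeprefix("Bearer "))
#   user = claims.get("preferred_username")
```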

The image above shows the high-level implementation at many customers. While Azure AD is used for accessing Office 365 and sometimes some SaaS applications, many other applications are published via a WAF in the DMZ. Internal users, however, only access these applications via a load balancer, as their workstations are treated as “trusted” and regulated. The WAF exposes the datacenter to the outside world by listening on open ports for traffic to come in. This means the WAF now also needs DDoS protection and other advanced capabilities to keep out the bad guys, who might otherwise reach the internal (very open) LAN.

When we look at a newer, cloud-based approach to publishing applications to the corporate network (and the internet), we see that there are four basic layers in the datacenter. The first is the access layer: how are your users going to connect to your applications? For internal users on the corporate network this can be service endpoints (IP addresses published in the access layer, but assigned to the WAF or load balancer). External users use Azure AD Application Proxy (if required, in combination with Ping Access) in the access layer. No one can connect to the access layer from the outside without pre-authentication, which is handled by Azure Active Directory. As you can see, there are no published public IP addresses for the applications; access is either via the Azure AD Application Proxy or directly over the internal network.
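A rough sketch of that access-layer decision is shown below. The networks and URLs are made-up examples (the msappproxy.net address only shows the shape of a default App Proxy external URL); the point is that externally there is no public IP for the application, only the pre-authenticated proxy endpoint.

```python
# Rough sketch of the access-layer routing described above. Networks and URLs
# are illustrative assumptions; externally only the proxy endpoint exists.
from ipaddress import ip_address, ip_network

CORPORATE_NETWORKS = [ip_network("10.0.0.0/8")]          # assumed corporate LAN
INTERNAL_ENDPOINT = "https://app.corp.example.local"     # VIP on the WAF/load balancer
PROXY_ENDPOINT = "https://app-contoso.msappproxy.net"    # example App Proxy external URL

def entry_point(client_ip: str) -> str:
    """Return where this client should connect: internal VIP or the proxy."""
    addr = ip_address(client_ip)
    if any(addr in net for net in CORPORATE_NETWORKS):
        return INTERNAL_ENDPOINT
    # External clients only ever see the proxy URL and must pre-authenticate
    # against Azure AD before the proxy forwards anything to the datacenter.
    return PROXY_ENDPOINT

print(entry_point("10.1.2.3"))      # -> internal endpoint
print(entry_point("203.0.113.5"))   # -> proxy endpoint
```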

The idea behind the architecture is: any user beyond the access layer is treated the same, whether external or internal, on a corporate device or BYOD.

The access layer always forwards to the WAF or, in many cases, just a load balancer. While we stated that every user should be treated equally, it is vital here to think of the worst-case scenario. Hence, many companies will use a full WAF to provide additional security and inspection of the traffic.

After the security layer come the actual applications. Each application should be isolated from the others to prevent application-to-application infection: if one application gets compromised, the layers in between should prevent an attacker from reaching another application.
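A toy model of that isolation idea, assuming made-up segment names and rules (in practice this lives in firewall or NSG policies, not application code): east-west traffic between application segments is denied unless it is explicitly allowed.

```python
# Toy model of the segmentation idea: traffic between application segments is
# denied by default; only the flows listed below are allowed.
# Segment names, ports and rules are illustrative assumptions, not a real policy.
ALLOWED_FLOWS = {
    ("access-layer", "waf", 443),
    ("waf", "app-hr", 443),
    ("waf", "app-finance", 443),
    ("app-hr", "backend-ad", 88),        # Kerberos to the backend services layer
    ("app-finance", "backend-sql", 1433),
}

def is_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default deny: a flow is only permitted if it is explicitly listed."""
    return (src_segment, dst_segment, port) in ALLOWED_FLOWS

# A compromised HR app cannot hop sideways into the finance app:
assert not is_allowed("app-hr", "app-finance", 443)
assert is_allowed("waf", "app-hr", 443)
```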

And then there are the backend services on which the applications depend: Active Directory (in the case of Kerberos), DNS and management services. The authentication agents are added to support Pass-through Authentication and can be removed from this architecture if it relies only on password hash synchronization. Note that I did not add ADFS to the picture. ADFS requires you to publish the service to the outside world without pre-authentication (you need to authenticate against ADFS itself, so pre-authentication would not work), and because of that architecture you are once again exposing your datacenter by opening listening ports for this service.

Which brings us to our final picture..

My ideal architecture is drafted in the bottom portion of the picture. Users use their Azure Active Directory account on any device they choose. They can work from any location (removing the need for MPLS-based connections for end-user services in branch offices). Their devices are either domain joined or workplace joined, or, if not, they can use (risk-based) MFA to access services. The applications (on-premises or cloud-based) are available either through the Azure AD Application Proxy (for external access) or over the internal LAN, and SaaS services are reached over nothing but the internet, without the need for any corporate network connection.
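As a toy illustration of that device and MFA policy (a sketch of the idea, not Azure AD Conditional Access itself; the inputs and risk levels are assumptions): trusted joined devices get straight in, anything else steps up to MFA, and high-risk sign-ins are blocked.

```python
# Toy sketch of the access policy described above: trusted (domain/workplace
# joined) devices get in directly, everything else must pass (risk-based) MFA.
# This illustrates the idea only; it is not Azure AD Conditional Access.
from dataclasses import dataclass

@dataclass
class SignIn:
    device_joined: bool      # domain joined or workplace (Azure AD) joined
    sign_in_risk: str        # "low", "medium" or "high" (assumed risk levels)

def access_decision(attempt: SignIn) -> str:
    if attempt.sign_in_risk == "high":
        return "block"                      # too risky regardless of device
    if attempt.device_joined and attempt.sign_in_risk == "low":
        return "allow"                      # trusted device, low risk
    return "require_mfa"                    # BYOD or elevated risk -> step up

print(access_decision(SignIn(device_joined=True, sign_in_risk="low")))    # allow
print(access_decision(SignIn(device_joined=False, sign_in_risk="low")))   # require_mfa
print(access_decision(SignIn(device_joined=False, sign_in_risk="high")))  # block
```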

So let’s give new, young employees the freedom they desire. Let’s get rid of those “corporate image based” laptop deployments. Open up your application landscape and see that freeing yourself from those ancient architectures actually brings you better security, while giving freedom to your employees…
