Can Kafka be used for Video Streaming?
Kafka was developed around 2010 at LinkedIn by a team that included Jay Kreps, Jun Rao, and Neha Narkhede. Apache Kafka is a distributed publish-subscribe messaging system in which multiple producers send data to the Kafka cluster, which in turn serves it to consumers. In the publish-subscribe model, message producers are called publishers, and those who consume messages are called subscribers. Kafka maintains a robust queue that handles a high volume of data and passes messages from one point to another. Kafka prevents data loss by persisting messages on disk and replicating data across the cluster.
Kafka Architecture:
Topic: A stream of messages of a particular type is called a topic.
Producer: A Producer is a source of data for the Kafka cluster. It will publish messages to one or more Kafka topics.
Consumer: A Consumer consumes records from the Kafka cluster. Multiple consumers can read messages from topics in parallel.

Brokers: A Kafka cluster may contain multiple brokers. A broker acts as a bridge between producers and consumers. A cluster may contain 10, 100, or 1,000 brokers if needed, and each broker has a unique identifier number.
Record: Messages sent to Kafka are in the form of records; each record is a key-value pair.
ZooKeeper: It is used to track the status of Kafka cluster nodes. It also maintains information about Kafka topics, partitions, etc.
Kafka Cluster: A Kafka cluster is a system that comprises multiple brokers, topics, and their respective partitions. Producers write data to topics within the cluster, and consumers read it back out.
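These roles can be illustrated with a toy in-memory broker in Python. This is a hedged sketch of the publish-subscribe model only (all names are illustrative); a real Kafka cluster adds partitions, replication, and on-disk persistence:

```python
from collections import defaultdict

class MiniBroker:
    """Toy in-memory broker illustrating Kafka's publish-subscribe model.
    Each topic is an append-only log of records; a record is a key-value pair.
    Consumers track their own offset, so many consumers can read in parallel."""

    def __init__(self):
        self.topics = defaultdict(list)  # topic name -> append-only log

    def publish(self, topic, key, value):
        """Producer side: append a record to the topic's log."""
        self.topics[topic].append((key, value))

    def consume(self, topic, offset=0):
        """Consumer side: return records after `offset` and the new offset."""
        log = self.topics[topic]
        return log[offset:], len(log)

broker = MiniBroker()
broker.publish("clicks", "user1", "page/a")
broker.publish("clicks", "user2", "page/b")

records, offset = broker.consume("clicks")
print(records)  # [('user1', 'page/a'), ('user2', 'page/b')]
print(offset)   # 2 -- the consumer resumes from here on its next poll
```

Because each consumer keeps its own offset, two consumers reading the same topic do not interfere, which mirrors how Kafka consumers in different consumer groups each see the full stream.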
Who uses Kafka?
A lot of companies have adopted Kafka over the last few years. Here are some of them.
1) Netflix
Netflix uses Kafka clusters together with Apache Flink for distributed video streaming processing.
2) Pinterest
Pinterest uses Kafka to handle critical events like impressions, clicks, close-ups, and repins. According to Kafka Summit 2018, Pinterest has more than 2,000 brokers running on Amazon Web Services, which transport about 800 billion messages and more than 1.2 petabytes per day, and handle more than 15 million messages per second during peak hours.
3) Uber
Uber requires a lot of real-time processing. Uber collects event data from the rider and driver apps. Then they provide this data for processing to downstream consumers via Kafka.
4) LinkedIn
Apache Kafka originated at LinkedIn. LinkedIn uses Kafka for monitoring, user activity tracking, newsfeeds, and stream data.
5) Swiftkey
Swiftkey uses Kafka for analytics event processing.
Apart from the companies listed above, many others, such as Adidas, Line, The New York Times, Agoda, Airbnb, Oracle, and PayPal, use Kafka.
Why can Apache Kafka be used for video streaming?
- High throughput – Kafka handles large volumes of high-velocity data with very little hardware, supporting throughput of thousands of messages per second.
- Low Latency – Kafka handles messages with very low latency in the range of milliseconds.
- Scalability – Kafka is a distributed messaging system that scales out easily without downtime. It handles terabytes of data without much overhead and can scale up to trillions of messages per day.
- Durability – Kafka persists messages on disk, which makes it a highly durable messaging system. Message replication is another reason for durability, ensuring messages are never lost.
Other reasons to consider Kafka for video streaming are reliability, fault tolerance, high concurrency, batch handling, real-time handling, etc.
Neova has expertise in message broker services and can help build micro-services based distributed applications that can leverage the power of a system like Kafka.
References :
- https://kafka.apache.org/powered-by
- https://kafka.apache.org/documentation/
- https://blog.softwaremill.com/who-and-why-uses-apache-kafka-10fd8c781f4d
Atmosphere Framework: A complete walk-through
Atmosphere:
The Atmosphere Framework contains client- and server-side components for building asynchronous web applications. It is a real-time client-server framework for the JVM, supporting WebSocket with cross-browser fallbacks, and it works with all major browsers and servers. Atmosphere is the most popular asynchronous application development framework for enterprise Java.
Atmosphere’s Java Client is called wAsync.
Why choose the Atmosphere Framework?
Our requirement called for a framework that supports bi-directional communication, so we selected the Atmosphere Framework. Its main advantage is two-way communication between client and server over a single TCP connection. With plain HTTP, a client sends a request and the server returns a response; every request from the same client to the same server needs a new connection. A WebSocket instead maintains a single connection between the client and the server, and keeps that connection alive with all its clients until they disconnect.
Tomcat Configuration Steps:
We used atmosphere-runtime-native as the Atmosphere dependency in Maven, with Tomcat v9 as the server.
Step 1: Download and Install Tomcat.
Step 2: Create an Environment Variable JAVA_HOME.
Step 3: Configure the Tomcat Server.
Step 4: Start Tomcat Server.
Step 5: Develop and Deploy an App.
Maven Tool:
Maven is a build tool that manages different dependencies. Here, a dependency means an external library required to build the project.
It does the following:
- Generates source code (if auto-generated code is used).
- Generates documentation from source code.
- Compiles source code.
- Packages compiled code into JAR or ZIP file.
- Installs the packaged code in a local repository, server repository, or central repository.
How to Broadcast Message from JAVA server to JAVA Client in Atmosphere Framework:
- Used Atmosphere Framework
- Used runtime-native as atmosphere dependency for maven tool
- Tomcat-v9 as a server
Long polling :
Long polling is the simplest way of maintaining a persistent connection with a server without using a specific protocol like WebSocket or Server-Sent Events. It is easy to implement and delivers messages without delay.
The Flow:
- A request is sent to the server.
- The server doesn’t close the connection until it has a message to send.
- When a message appears, the server responds to the request with it.
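The flow above can be simulated in a few lines of Python. This is a hedged, protocol-free sketch of the idea, using a thread and a queue to stand in for the client, the held-open connection, and the server:

```python
import queue
import threading
import time

messages = queue.Queue()  # stands in for the server's pending messages

def long_poll(timeout=5.0):
    """Client-side view: the 'connection' stays open (blocks) until the
    server has a message, or until the poll times out."""
    try:
        return messages.get(timeout=timeout)
    except queue.Empty:
        return None  # timed out with no message; a real client would re-poll

def publish_later():
    time.sleep(0.2)           # a message "appears" on the server later
    messages.put("hello")

threading.Thread(target=publish_later).start()
print(long_poll())  # blocks ~0.2s, then prints: hello
```

In a real deployment the blocking happens inside the HTTP request handler, which is exactly what the Atmosphere server-side API manages for you.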
Server:
Broadcasting the messages to clients-
BroadcasterFactory.getDefault().lookup("URL to broadcast", true).scheduleFixedBroadcast(message, 2, TimeUnit.SECONDS);
Client:
We have used the AsyncHttpProvider library to establish a connection with the Server in async mode.
AtmosphereRequest request = atmosphereResource.getRequest();
String incomingMessage = request.getReader().readLine();
To use Atmosphere, add the following dependency:
1.
<dependency>
<groupId>org.atmosphere</groupId>
<artifactId>atmosphere-runtime</artifactId>
<version>2.4.21</version>
</dependency>
2.
<dependency>
<groupId>org.atmosphere</groupId>
<artifactId>atmosphere-spring</artifactId>
<version>2.4.3</version>
</dependency>
Benefits of Atmosphere:
- Provides high availability.
- Scalability.
- Fault Tolerance.
Conclusion:
The Atmosphere Framework makes it easier to build asynchronous applications. It is portable and can be deployed to any web server.
Are Cloud-based Directory Services replacing Active Directory?
Active Directory turns 20 this year!
There is a lot of talk in the community about whether AD is outdated and whether other cloud-based directory services will replace it. Before we jump to any conclusion, let's first understand AD in its entirety.
Active Directory (AD) is Microsoft's directory service for Windows domain networks. It handles centralized domain management and directory-based, identity-related services, and it is a framework on which other services, such as Certificate Services and Federation Services, are deployed.
How does Active Directory work?
Active Directory stores data as objects. An object is a single element, such as a user, group, application, or device (for example, a printer). Objects are either general resources, such as printers or computers, or security principals, such as users or groups.
The server that runs Active Directory Domain Services (AD DS) is called a domain controller. AD DS authenticates and authorizes all users and computers in a Windows domain network. When a user logs in to a computer, Active Directory checks the submitted username and password and determines whether the user is a system administrator or a normal user. AD DS manages and stores information, provides authentication and authorization mechanisms, and establishes a framework to deploy related services: Certificate Services, Active Directory Federation Services, Lightweight Directory Services, and Rights Management Services.
Active Directory Services :
Following are the services provided by Active Directory :
1. Domain Services – stores information about members such as users and devices, their access rights, and credentials. The server running this service is called a domain controller.
2. Lightweight Directory Services – an implementation of the LDAP protocol.
3. Directory Federation Services – a single sign-on (SSO) service. It is useful when a user is registered with several web services under the same credentials; federation services let those credentials work across different networks.
4. Certificate Services – It creates and validates public key certificates for an organization. We can use these certificates to encrypt files, emails, and network traffic.
5. Rights Management – provides development and management tools that help organizations protect information.
AD objects :
The individual components of an organization are called objects in Active Directory, and Active Directory stores its data as such objects. The following is a list of AD objects:
- Contact: A contact object stores the details of vendors or suppliers who are not employees of the organization. Only the person's name and contact details are stored.
- User: A member of the organization is represented in AD by a user object, which contains the first name, last name, email address, and associated groups.
- Printer: This object contains information about all the printers in the network.
- Computer: This object contains information about all the computers in the network.
- Shared folder: This allows users to access folders from other computers on the network that have been marked as shared. Only folders, and not individual files, can be shared. If you want to share an individual file then it should be placed inside a shared folder.
- Group: A collection of directory objects. It can contain computers, users, other groups, and other AD objects. There are two types of groups:
- Distribution groups – used with emails application to send emails to the collection of users.
- Security groups– used to assign access to resources on the network
E.g., if you want to give only particular departments access to certain documents, a network administrator can create a group containing all members of the department and grant it access to the file servers containing those documents.
- Organizational units (OUs): OUs help structure your network resources so they are easy to locate. In an OU you can place users, printers, computers, groups, and other OUs. Each domain can create its own OUs.
- Builtin: Several user and group accounts are created automatically when you install Active Directory for the first time.
Benefits of Active Directory :
Here are some benefits of AD:
- Central Storage – Active Directory provides a centralized repository for users' files. If you save your files on the central server, other users of the domain can access them.
- Better Backup – If a user's machine is compromised by a cyberattack, all of the files on that machine may become inaccessible. If they were saved to a central storage location, however, they can easily be recovered.
- Cut Costs– Active Directory is easy to scale up or down.
- Improved Security– As the network administrator has control over the domain in AD, they can implement new security measures when necessary. This can include installing new antivirus software onto each machine, or making sensitive documents inaccessible so they don’t fall into the wrong hands.
Cloud-based Directory Services:
There have been many new entrants in this space, with the likes of Okta, JumpCloud, and others providing alternatives to what was the market-leading directory service. Enterprises are more distributed than ever before, and applications are being deployed in the cloud, making it imperative to have a directory service that can be centrally administered and managed. Microsoft, with its federated AD service in Azure (ADFS), has provided an extension of AD in the cloud. Okta, a pure SaaS provider, is another alternative for businesses looking at a cloud-based SSO solution.
Neova has expertise in Active Directory, Azure ADFS, Okta, and JumpCloud and can help organizations integrate their applications with one or more of these directory service providers.
How to deploy an app on Heroku?
What is Heroku?
Heroku is a cloud platform, a Platform as a Service (PaaS): a container-based, fully managed system built on AWS. Data services, add-ons, and plug-ins are fully integrated into Heroku for deploying and running modern applications.
Heroku provides a developer-centric approach that keeps users closely associated with the tools and workflows integrated with it.
Because it provides a ready runtime environment and servers, Heroku benefits development and DevOps teams through seamless integration with different development tools.
How does it work?
When code is pushed to Heroku's Git repository, a slug (a compressed archive of the application and its dependencies) is created, and Heroku launches a dyno to validate and prepare the environment with the libraries the code requires.
Dynos are nothing but lightweight Linux containers.
When you push your code to Heroku, it converts the code into a slug, automatically runs all your migrations and scripts, and your application is hosted.
How it’s different
The platform is built keeping in mind that developers can integrate applications easily. Heroku is a fully managed platform that helps developers focus more on application development.
You can provision a new application on Heroku in seconds. Deploying your code and restarting your processes typically take just a few minutes.
Advantages
- Heroku provides the facility to add application config variables and change the way your app behaves.
- Heroku has an inbuilt OS and related services. There is no need to configure the infrastructure required for the application. A novice person can do the deployment without focusing on infrastructure.
- Easy subscription plans to enroll as per the requirements.
- Wonderful live dashboard for application/platform performance monitoring.
- Notification/Alerting system for any critical/configured spikes
- Heroku provides add-ons for all your needs. You name it, they have it.
- Heroku Scheduler to schedule any task that you want to run on a regular basis.
- Support for modern open-source languages (Node.js, Java, Ruby, PHP, Python, Go, Scala, Clojure).
- No need for a DevOps Engineer to maintain the Heroku platform.
Try it yourself:
5 Steps to deploy your application on Heroku:
Create a personal app:
1. Log in to Heroku
- Under personal apps, click on "Create new app"
- Choose a personal app under the team drop-down and add the app name
- Click on "Create app"
2. Install the Heroku CLI
$ sudo add-apt-repository "deb https://cli-assets.heroku.com/branches/stable/apt ./"
$ curl -L https://cli-assets.heroku.com/apt/release.key | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install heroku
3. Log in to Heroku using: heroku login
4. Add the Heroku remote to your local repository
$ git remote add heroku your_heroku_repo_url
$ git remote -v
This should show two remotes: origin, pointing to your Git repository, and heroku, pointing to your Heroku repository.
5. Deploy your application
$ git push heroku master
This deploys your code on Heroku.
Challenges using Heroku
- Inbound and outbound latency is high.
- Heroku does not allow you to run any other services on dynos.
- It proves to be expensive for large and high-traffic apps.
- Limited in types of instances
- Not ideally suited for heavy-computing projects.
Conclusion:
Big brands like Toyota Europe, Citrix, Westfield, Yesware, and Salesforce are a few examples of companies that use the Heroku platform.
We have an expert team of engineers who can help you implement Heroku. Please feel free to connect in case of any queries.
Containers and Docker: Using .NET
What is Docker?
Docker is an open-source platform for developing, shipping, and running applications. It lets you separate applications from infrastructure so you can deliver software quickly. Docker provides many benefits, such as runtime environment isolation, consistency via code, and portability.
What are Containers?
Containers are the organizational units of Docker. When we build an image and start running it, we are running a container. The container analogy comes from the portability of the software running inside it. Isolation and security allow you to run many containers simultaneously on a given host, and because there is no extra load from a hypervisor, containers are lightweight and run directly within the host machine's kernel.
Difference between virtual machines and Docker containers:
| | Virtual Machine | Docker Container |
| --- | --- | --- |
| Process isolation | Hardware-level | OS-level |
| Sharing of OS | Each VM has a separate OS | Containers share the host OS |
| Booting time | Boots in minutes | Boots in seconds |
| Size | A few GBs | Lightweight (KBs/MBs) |
| Availability | Ready-made VMs are difficult to find | Pre-built Docker containers are easily available |
Docker Engine
Docker Engine lets you develop, assemble, ship, and run applications using the following components:
1. Docker Daemon
It is a persistent background process that listens for Docker API requests and processes them & manages Docker images, containers, networks, and storage volumes.
2. Docker Engine REST API
An API used by programs to interact with the Docker daemon; it can be accessed by any HTTP client.
3. Docker CLI
A command-line interface client for interacting with the Docker daemon. It greatly simplifies how you manage container instances and is one of the key reasons developers love using Docker.
Docker architecture
- Docker uses a client-server architecture.
- The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers.
- The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon.
- The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
1. Docker daemon
A persistent background process that manages Docker images, containers, networks, and storage volumes. The Docker daemon constantly listens for Docker API requests and processes them.
2. Docker client
Docker users interact with Docker via a client. When any Docker command runs, the client sends it to the Docker daemon, which carries it out. Docker commands use the Docker API, and a Docker client can communicate with more than one daemon.
3. Docker registries
A Docker registry stores Docker images. There are public and private registries. Docker has a public registry called Docker Hub, where you can also store images privately.
If you are using Docker Datacenter (DDC), it includes Docker Trusted Registry (DTR).
With the docker pull or docker run commands, images are pulled from the configured registry; with the docker push command, an image is pushed to the configured registry.
Docker objects
While working with Dockers, we use the following Docker objects.
Images
Docker images are read-only templates with instructions for creating a Docker container. Images can be pulled from Docker Hub and used as-is, or you can add instructions to a base image to create a new, modified image. Using a Dockerfile, users can create their own Docker images.
Containers
Containers are runnable instances of images. Using the Docker API or CLI, users can create, start, stop, move, or delete a container.
When you run a Docker image, it creates a Docker container; all the applications and their environment run inside this container.
Networking
Docker implements networking in an application-driven manner, providing various options while maintaining enough abstraction for application developers. There are basically two types of networks: the default Docker networks and user-defined networks. By default, you get three networks when Docker is installed: none, bridge, and host.
Storage
Data can be stored within the writable layer of a container, but this requires a storage driver. Being non-persistent, it perishes whenever the container is not running, so it is not easy to transfer this data. Docker supports the following four options for persistent storage:
- Data Volumes
- Data Volume Container
- Directory Mounts
- Storage Plugins
.NET AND DOCKER
Containers offer a lightweight way to isolate your application from the rest of the host system, sharing just the kernel and using only the resources allocated to your application.
Build a .NET Core image
You can build and run a .NET Core-based container image using the following commands:
docker build --pull -t dotnetapp .
docker run --rm dotnetapp
You can use the docker images command to see a listing of your image, as you can see in the following example.
% docker images dotnetapp
| REPOSITORY | TAG | IMAGE ID | CREATED | SIZE |
| --- | --- | --- | --- | --- |
| dotnetapp | latest | baee380605f4 | 14 seconds ago | 189MB |
Package an ASP.NET Core app in the container:
To package an ASP.NET Core app in a container, there are three steps:
- Create an ASP.NET Core project.
- Write a Dockerfile that describes how to build your image.
- Create a container that runs your image as a process.
Create your ASP.NET Core Project
Step 1: Run the following commands in the Command Prompt,
mkdir dockerapp
cd dockerapp
dotnet new webapi
After that, you will have a functional API. To test it, run this command.
dotnet build
dotnet run
Browse to the following address to get the values from ValuesController:
localhost:5000/api/values
Here we have our Web API that returns ["value1", "value2"] as JSON.
Step 2: Deployment Environment
Here, the final image does not need a compiler: we build the application in a separate build stage, copy the build output into the image, and use a lightweight image containing only the .NET Core runtime (microsoft/aspnetcore) to execute our application.
Copy and paste below instructions in your Dockerfile:
#Development Environment Process
FROM microsoft/aspnetcore-build:latest AS build-env
WORKDIR /app
#Copy csproj and restore as a distinct layer, so all dependencies are cached
COPY *.csproj ./
RUN dotnet restore
#Copy everything else and build the project in the container
COPY . ./
RUN dotnet publish --configuration Release --output dist
#Deployment Environment Process
#Build runtime image
FROM microsoft/aspnetcore:latest
WORKDIR /app
COPY --from=build-env /app/dist ./
EXPOSE 80/tcp
ENTRYPOINT [ "dotnet","dockerapp.dll" ]
Step 3: In the command prompt, type:
docker build -t rootandadmin/dockerapp -f Dockerfile .
Step 4: Create a Container
Now we have our image rootandadmin/dockerapp, but an image is inactive until we run it. To make it live, we need to create a container.
There is a big difference between an image and a container: a container is a process that runs an image.
To create and start a container from our image, type the following commands:
docker create -p 555:80 --name dockerapp01 rootandadmin/dockerapp
docker start dockerapp01
Access your API in the browser using the following address:
localhost:555/api/values
Benefits of containerization
Containers solved an important problem: how to make sure that software runs correctly when it is moved from one computing environment to another.
Agile methodologies are based on frequent and incremental changes in the code versions, such that frequent testing and deployment are required.
DevOps engineers frequently move software from a test environment to a production environment, ensuring that the required resources for provisioning are in place, that the appropriate deployment model is used, and that performance is validated and monitored.
The initial solution for this was Virtualization. Virtualization allows multiple operating systems to be run completely independently on a single machine.
Containers extend the idea of virtualization. In virtualization, the hypervisor creates and runs multiple instances of an operating system, so that several operating systems can run on a single physical machine sharing the hardware resources.
The container model eliminates hypervisors entirely. Instead of hypervisors, containers are essentially applications, and all their dependencies packaged into virtual containers. Each application shares a single instance of the operating system and runs on the “bare metal” of the server.
Advantages of Container:
- All containers share the resources of a single operating system, and there is no virtualized hardware. Since the operating system is shared by all containers, they are much more lightweight than traditional virtual machines, and it is possible to host far more containers on a single host than fully fledged virtual machines.
- Containers share a single operating system kernel and start up in a few seconds, instead of the minutes required to start a virtual machine.
- Containers are also very easy to share.
Conclusion
Docker is a mature technology that helps you package your applications. It reduces the time necessary to bring applications to production and simplifies reasoning about them. Furthermore, Docker encourages a deployment style that’s scripted and automatic. As such, it promotes reproducible deployments.
Polly – A .NET resilience and transient-fault-handling library
Nowadays, cloud-based, microservice-based, and Internet-of-Things (IoT) applications often depend on communicating with other systems across an unreliable network. Those systems can become unavailable or unreachable due to transient faults such as network problems, timeouts, being offline, or being overloaded or non-responsive.
Polly, a .NET resilience and transient-fault-handling library, offers multiple resilience policies that let software architects design suitable reactive strategies for handling transient faults, as well as proactive strategies for promoting resilience and stability. In this post, I will walk you through the policies and strategies that the Polly library offers for handling transient faults.
Reactive transient fault handling approaches
1. Retry
These short-lived faults typically correct themselves after a short time, and a robust cloud application should be prepared to deal with them by using a strategy like the Retry pattern.
Technically, Retry allows callers to retry operations in the expectation that many faults are short-lived and may self-correct; the retried operation may succeed, perhaps after a short delay.
Waiting between retries allows faults to self-correct. Practices such as exponential backoff and jitter refine this by scheduling retries so that they do not become sources of further load or spikes.
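Polly configures retries declaratively in .NET, but the mechanics are language-agnostic. The following Python sketch (illustrative names, not Polly's API) shows retry with exponential backoff plus jitter:

```python
import random
import time

def retry(operation, max_attempts=4, base_delay=0.05):
    """Retry a callable, waiting between attempts with exponential
    backoff (0.05s, 0.1s, 0.2s, ...) plus a small random jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)

calls = {"count": 0}

def flaky():
    """Fails twice with a transient fault, then succeeds."""
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient fault")
    return "ok"

print(retry(flaky))  # succeeds on the third attempt: ok
```

The jitter term spreads retries from many callers over time, so they do not all hit the recovering system at the same instant.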
2. Circuit-Breaker
There can also be circumstances where faults are because of unexpected events that might take much longer to fix themselves. In these situations, it might be useless for an application to continually retry an operation that is unlikely to succeed. As an alternative, the application should be coded to accept that the operation has failed and handle the failure accordingly.
Using HTTP retries in these situations could amount to a denial-of-service (DoS) attack against your own software. Therefore, you need a defence barrier so that excessive requests stop when retrying is no longer worthwhile. That defence barrier is precisely the Circuit Breaker.
How does circuit-breaker work?
A circuit breaker design pattern perceives the level of faults in calls placed through it and prevents calls when a configured fault threshold is reached.
The circuit breaker can have the following states:
- Closed to Open
- Open to Half-Open
- Half-Open to Open
- Half-Open to Closed
When faults exceed the threshold, the circuit breaks (opens). An open circuit causes calls placed through it to fail immediately with an exception rather than being actioned, which means the call is not attempted at all. This protects a faulting system from extra load while letting the calling system avoid calls that are unlikely to succeed. Failing instantly in this scenario usually also gives a better user experience.
After a configured time has elapsed, the circuit moves to a half-open state, where the next call is treated as a trial of the faulting system's health. Based on this trial call, the breaker decides whether to close the circuit (resume normal operation) or break it again.
Note: Circuit-breaker implementation in software systems is like what is in electrical wiring; substantial faults will ‘trip’ the circuit, protecting systems regulated by the circuit.
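The state transitions above can be sketched generically. This hedged Python sketch (not Polly's implementation) trips open after a threshold of consecutive failures, fails fast while open, and allows a trial call after a reset period:

```python
import time

class CircuitBreaker:
    def __init__(self, threshold=3, reset_after=30.0):
        self.threshold = threshold      # consecutive failures before opening
        self.reset_after = reset_after  # seconds before a half-open trial
        self.failures = 0
        self.opened_at = None           # None means the circuit is closed

    def call(self, operation):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                # Open: fail fast; the call is not attempted at all.
                raise RuntimeError("circuit open")
            self.opened_at = None       # half-open: allow one trial call
        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.time()  # trip (or re-trip) the breaker
            raise
        self.failures = 0               # a success closes the circuit
        return result

breaker = CircuitBreaker(threshold=2, reset_after=60.0)

def failing_call():
    raise IOError("service down")

for _ in range(2):                      # two real failures trip the breaker
    try:
        breaker.call(failing_call)
    except IOError:
        pass
# From here, calls fail fast with "circuit open" and never reach the service.
```

A successful trial call in the half-open state resets the failure count and resumes normal operation, matching the Half-Open to Closed transition listed above.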
3. Fallback
A Fallback policy defines how the operation should react if, even with retries, or because of a broken circuit, the underlying operation fails repeatedly. However well resilience-engineered your system is, failures are likely to occur. A fallback means defining a substitute action for when that happens: it is a plan for failure, rather than leaving failure to have unpredictable effects on your system.
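Generically, a fallback is a wrapper that substitutes a known-good response when the wrapped operation fails. A hedged Python sketch (illustrative names, not Polly's API):

```python
def with_fallback(operation, fallback_value):
    """Run the operation; on any failure, return the substitute instead
    of letting the failure propagate unpredictably."""
    try:
        return operation()
    except Exception:
        return fallback_value  # e.g. a cached result or a default message

def fetch_prices():
    # Stand-in for an operation whose retries/circuit are already exhausted.
    raise TimeoutError("upstream unavailable")

print(with_fallback(fetch_prices, {"status": "cached"}))  # {'status': 'cached'}
```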
Proactive transient fault handling approaches
Retry and Circuit-Breaker are primary approaches for resilience to transient faults, however, both are reactive, as they react after the failure response to a call has been received.
What will happen if…???
- The response never comes
- The response is so delayed that we do not wish to continue waiting
- The waiting/reaction logic could itself have problems
Therefore, to handle the above questions, can we be more proactive in our approach to resource management and resilience?
Consider a high-throughput system in which many calls can be placed to a recently failed system before the first timeout is received. As an illustration (Example one), with 100 calls/second to a faulted system configured with a 10-second timeout, 1,000 calls could be placed before the first timeout is received. A circuit breaker will react as soon as the defined failure threshold is reached, but by then a resource bulge has already occurred and cannot be undone.
While Retry and Circuit-Breaker are reactive; Timeout, Bulkhead, and Caching policies configurations allow pre-emptive and proactive strategies. High-throughput systems can achieve increased resilience by explicitly managing load for stability. We will walk through each of these in the below sections one by one.
1. Timeout
Timeout allows callers to walk away from a pending call. It improves resilience by freeing callers when a response seems unlikely.
Moreover, as the above given ‘Example one’ data demonstrates, opting for timeouts can influence resource consumption in a faulting, high-throughput system.
These kinds of scenarios often lead to blocking up of threads or connections, the memory those awaiting calls consume, and which causes further failures. Consider how long you want to let your awaiting calls consume these costly system resources.
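A minimal sketch of a timeout policy with Polly; the endpoint and the `httpClient` instance are assumed for illustration:

```csharp
// A Polly timeout sketch: free the caller after 10 seconds instead of
// letting it wait indefinitely on a faulting downstream call.
// The URL and the httpClient instance are illustrative.
var timeoutPolicy = Policy.TimeoutAsync(TimeSpan.FromSeconds(10));

var response = await timeoutPolicy.ExecuteAsync(
    ct => httpClient.GetAsync("https://example.com/api/orders", ct),
    CancellationToken.None);
```

When the deadline passes, Polly cancels the awaited call and throws a `TimeoutRejectedException`, releasing the thread, connection, and memory the call was holding.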
2. Bulkhead
Excessive load is one of the main causes of system instability and failure. Building resilience into systems therefore involves explicitly managing that load and/or proactively scaling to support it.
Excessive load can either be due to:
- Genuine external demand, for example, spikes in user traffic
- Faulting scenarios, where large numbers of calls back up.
Bulkhead policies promote stability by directly managing load and thus resource consumption. The Polly Bulkhead limits parallelism of calls placed through it, with the option to queue and/or reject excessive calls.
- Bulkhead as isolation : Enforcing a bulkhead policy – a parallelism limit – around one stream of calls limits the resources that stream can consume. If that stream faults, it cannot consume all the resources in the host and so cannot bring down the whole host.
- Bulkhead as load segregation : More than one Bulkhead policy can also be enforced in the same process to accomplish load segregation and relative resource allocation.
A good comparison is the check-out lanes of supermarkets, where there are often distinct lanes for “baskets only” as opposed to full shopping carts. This segregation allows basket-only shoppers always to check out quickly; otherwise they could be stuck waiting behind an excess of full shopping carts.
Configuring multiple bulkhead policies to separate software operations delivers similar advantages, both in relative resource allocation for different call streams and in ensuring one kind of call cannot crowd out another.
- Bulkhead as load-shedding : Bulkhead policies can also be configured to proactively reject calls beyond a certain limit.
Why actively reject calls when the host might yet have more capability to service them?
The answer depends on whether you prefer managed or unmanaged failure. Setting explicit limits enables your systems to fail in predictable and testable ways. Ignoring your system's capacity does not mean there is no limit; it just means you do not know where the limit is, leaving your system liable to unpredictable, unexpected failures.
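Both uses can be sketched with Polly's bulkhead (the limits and the downstream call below are illustrative, not from the article):

```csharp
// A Polly bulkhead sketch: at most 12 parallel calls, at most 25 queued;
// anything beyond that is rejected (load-shedding) with a
// BulkheadRejectedException. CallDownstreamAsync is an illustrative name.
var bulkhead = Policy.BulkheadAsync(
    maxParallelization: 12,
    maxQueuingActions: 25);

try
{
    await bulkhead.ExecuteAsync(() => CallDownstreamAsync());
}
catch (BulkheadRejectedException)
{
    // The explicit limit was hit: fail fast and predictably,
    // instead of letting calls back up and exhaust the host.
}
```

Running two such policies with different limits in the same process gives the "baskets only" load segregation described above.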
3. Cache
Anything that lessens network traffic and overall call duration in turn raises resilience and improves user experience.
Caching can be configured with multiple caches – in-memory/local caches and distributed caches – in combination. Polly's CachePolicy supports multiple caches in the same call, using PolicyWrap.
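A sketch of combining a cache with another policy via PolicyWrap (assuming a cache provider such as the one from the Polly.Caching.Memory package; `memoryCacheProvider` and `FetchCatalogueAsync` are illustrative names):

```csharp
// A Polly cache + PolicyWrap sketch. On a cache hit the call returns
// immediately; on a miss the underlying call runs under the timeout,
// and its result is cached for five minutes.
var cachePolicy = Policy.CacheAsync(memoryCacheProvider, TimeSpan.FromMinutes(5));
var timeoutPolicy = Policy.TimeoutAsync(TimeSpan.FromSeconds(10));
var policyWrap = Policy.WrapAsync(cachePolicy, timeoutPolicy);

var catalogue = await policyWrap.ExecuteAsync(
    ctx => FetchCatalogueAsync(),
    new Context("catalogue"));   // the Context's OperationKey serves as the cache key
```

Because the cache policy sits outermost in the wrap, a hit avoids the downstream call entirely, reducing both network traffic and call duration.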
Conclusion: This post walked through the configurable resilience policies offered by the Polly library. Neova has the expertise to implement them in business applications to make those applications fault-resilient.
Note: For code-level details, the GitHub community provides examples demonstrating all these configurations in detail.
Top 10 Application Security Risks and How to Prevent Them
Application security is a major concern for business organizations today. Applications expose customer data, monetary transactions, and other sensitive business information to the outside world, making security one of the core concerns for security professionals and businesses. There is no way to guarantee 100% security, but there are proven methods organizations can practice to diminish application security challenges.
Through this post, we will cover the essentials of application security: what it is, why it is important, the major security attacks/threats an app can confront, and the best possible solutions to prevent them.
What is Application Security?
Application security comprises measures taken at the application level itself to enhance the security of a software application, often by finding, fixing, and preventing security vulnerabilities such as Cross-Site Scripting (XSS), SQL Injection, and Cross-Site Request Forgery (CSRF).
Application security mainly encompasses the security considerations which take place during Application Design and Development, but it also entails procedures and methodologies to safeguard apps after they get deployed into the production environment. It can be enforced using hardware, software, and procedures which recognize or reduce security vulnerabilities.

Why is application Security crucial?
Application security is no longer optional; it has become essential. Nowadays almost every business is exposed to the outside world through internet-connected applications, so there are several reasons why application security matters to any business. These range from maintaining a sound market reputation and brand name to preventing security breaches that could erode the trust your clients and shareholders place in your business.
What do recent case studies reveal?
Veracode, a software application security company, has reported a growing number of organizations, from small to large, falling victim to cyberattacks, resulting in data security breaches as well as hefty financial losses for the affected parties.
Another striking statistic comes from Veracode's State of Software Security Vol. 10 report: 83% of the 85,000 applications tested had at least one security flaw. The research found a total of 10 million flaws, and 20% of all apps had at least one high-severity flaw. Not all of those flaws pose a substantial security risk, but the sheer number draws attention.
These alarming figures raise numerous questions, including whether companies are doing their level best to safeguard customer information and keep it from falling into the wrong hands, and why they should do so. Outlined below are some benefits all companies gain from application security, which can reasonably drive them to tighten up their application security without further delay.
- Protect brand image: by envisioning security and preventing leaks
- Protect and build customer confidence: customer experience drives competition
- Protect and safeguard data: both the organization's and customers'
- Win investors' and lenders' trust: mitigating security risk improves reliability
OWASP TOP 10 VULNERABILITIES
Although the Veracode case studies detected hundreds of software security flaws, we focus sharply on the problems that fall under the OWASP Top 10 list. These flaws are so common and dangerous that no web application should be delivered to customers without some evidence that the software does not contain them.
What is OWASP?
The Open Web Application Security Project (OWASP) is an open-source, non-profit application security organization whose objective is to improve the security of applications. Its industry-standard Top 10 guidelines list the most critical application security risks to help developers better secure the applications they design and deploy.
OWASP Top 10 Security Risks and How to Prevent Them:
The following identifies each of the OWASP Top 10 Web Application Security Risks and recommends solutions and best practices to avoid or remediate them.
- Injection
Injection flaws, such as SQL injection, CRLF injection, and LDAP injection, occur when an attacker sends untrusted data to an interpreter, where it is executed as a command without proper authorization.
* Application security testing can easily detect injection flaws. Developers ought to use parameterized queries when coding to prevent them.
- Broken Authentication and Session Management
Improperly configured user and session authentication could permit attackers to compromise passwords, keys, or session tokens, or to take control of users' accounts and impersonate their identities.
* Multi-factor authentication, such as FIDO or dedicated apps, diminishes the risk of compromised accounts.
- Sensitive Data Exposure
Applications and APIs that do not appropriately protect sensitive data such as usernames, passwords, and financial data could allow attackers to access that information to commit fraud or steal user identities.
* Encryption of data at rest and in transit can assist you to comply with data protection regulations.
- XML External Entity
Inadequately configured XML processors evaluate external entity references within XML documents. Attackers can use external entities for attacks including remote code execution, and to disclose internal files and SMB (Server Message Block) file shares.
* Static application security testing (SAST) can detect this issue by examining dependencies and configuration.
- Broken Access Control
Improperly configured or missing restrictions on authenticated users permit them to gain access to unauthorized functionality or data, such as accessing other users' accounts, viewing sensitive documents, and altering data and access rights.
* Penetration testing is vital for detecting non-functional access controls; other testing methods only detect where access controls are missing.
- Security Misconfiguration
This risk refers to incorrect implementation of mechanisms intended to keep application data safe, such as error messages containing sensitive information (information leakage), misconfiguration of security headers and not updating or patching systems, frameworks, and components.
* Dynamic application security testing (DAST) can identify misconfigurations, such as leaky APIs.
- Cross-Site Scripting
Cross-site scripting (XSS) flaws provide attackers the capability to inject client-side scripts into the application, for example, to redirect users to malicious websites.
* Developers can be trained to prevent cross-site scripting with secure coding best practices, such as output encoding and input validation.
- Insecure deserialization
Insecure deserialization flaws can enable an attacker to remotely execute code within the application, tamper with or delete serialized objects, elevate privileges, and perform injection attacks.
* Application security tools can find deserialization flaws, but penetration testing is frequently required to validate the problem.
- Using Components with Known Vulnerabilities
Developers often do not realize which open source and third-party components are in their applications, making it difficult to update components when new vulnerabilities are discovered. Attackers can take advantage of an insecure component to take over the server or steal sensitive data.
* Software composition analysis performed at the same time as static analysis can detect insecure versions of components.
- Insufficient Logging and Monitoring
The time taken to identify a breach is frequently measured in weeks or months. Inadequate logging and ineffective integration with security incident response systems allow attackers to pivot to other systems and maintain persistent threats.
* Think like an attacker and use pen testing to find out if you have adequate monitoring; inspect your logs after pen-testing.
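Returning to the first risk above: the parameterized-query remedy for injection can be sketched in .NET as follows (the connection string, table, and variable names are illustrative):

```csharp
// A parameterized query with ADO.NET's SqlCommand: the user-supplied
// value is bound as a parameter, never concatenated into the SQL text,
// so it cannot change the structure of the query.
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "SELECT Id, Name FROM Users WHERE Email = @email", connection))
{
    command.Parameters.AddWithValue("@email", userSuppliedEmail);
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(reader["Name"]);
        }
    }
}
```

Even an input like `' OR '1'='1` is treated as a literal email value here, not as SQL.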
Conclusion:
We have a team of security experts with knowledge of application security policies, procedures, and guidelines, ready to assist product companies in securing their applications. Please feel free to connect with us at sales@neovatechsolutions.com.
Message Queue Implementation using RabbitMQ
Introduction
The rapidly increasing number of electronic gadgets connected to the internet has given birth to a new IT term: the Internet of Things (IoT). It is a system of interconnected computing devices and digital and mechanical machines, each identified by a unique number or name, with the ability to transfer data over a network without requiring human-to-computer or human-to-human interaction. This need for improved system integration triggered the development of message brokers, an inter-application communication technology that supports building a common integration mechanism for cloud-native, microservices-based, serverless, and hybrid cloud architectures.
Why Did Message Brokers emerge?
Can you imagine the current volume of data from gadgets connected to the internet across the globe? Nowadays, approximately 12 billion smart machines are connected to it. Bearing in mind that around 7 billion people currently live on this planet, that is almost one and a half devices per person. By the end of this year, this number could presumably surge to 200 billion, or even further. With technological developments such as smart houses and other automated systems, our everyday life becomes more and more digitized.
What is a message broker?
Wikipedia’s formal definition says: “A message broker translates a message from the formal messaging protocol of the sender to the formal messaging protocol of the receiver.”

Functional features for a message broker
- A message broker is computer software that enables apps, systems, and services to communicate with each other by exchanging information, irrespective of the programming languages used to build them or the software/hardware platforms on which they are deployed.
- Message brokers are software programs deployed in messaging middleware or message-oriented middleware (MOM) solutions.
- Message brokers can validate, store, route, and deliver messages to the intended destinations. This enables decoupling of services and processes within systems.
- Built-in “message queues” hold messages until they are delivered to their intended destinations, ensuring message reliability and guaranteed delivery.
- Message brokers provide queue managers to handle the interactions between multiple message queues, as well as services providing data routing, message translation, persistence, and client state management functionalities.
- Supports Asynchronous messaging, which prevents the loss of valuable data and enables systems to continue functioning even in the face of the intermittent connectivity or latency issues common on public networks.
Message broker implementation patterns
Message brokers can be configured in two basic message distribution patterns:
- Point-to-point messaging: Each message in the queue is sent to only one recipient and is consumed only once, so it represents a one-to-one relationship between the message’s sender and receiver. Suitable use cases for this messaging pattern include payroll and financial transaction processing. In these systems, both senders and receivers need an assurance that each payment will be sent once and once only.

- Publish/subscribe messaging: often referred to as “publisher/subscriber”, wherein the producer of each message publishes it to a queue, and multiple message consumers subscribe to the queues from which they want to receive messages. All messages published to a queue are distributed to all the applications subscribed to it. It is a broadcast-style distribution method, depicting a one-to-many relationship between the message’s publisher and its consumers. For example, if an airline were to circulate updates about the landing times or delay status of its flights, multiple parties could make use of the information: ground crews performing aircraft maintenance and refuelling, baggage handlers, flight attendants and pilots preparing for the plane’s next trip, and the operators of visual displays notifying the public. A pub/sub messaging style would be suitable for this scenario.

Note: A third possible pattern is a hybrid of the two.
Best Message Broker Software Tools
These tools are typically leveraged by IT professionals such as system administrators and software developers. Business organizations use this software to synchronise distributed applications, simplify the coding of dissimilar applications, improve performance, and automate communication-related tasks.
To qualify as a message broker, a software tool must:
- Facilitate asynchronous messaging
- Store, deliver, and delete messages
- Document communication information
- Allow administrative control over messaging permissions
Major available tools
- RabbitMQ
- Apache Kafka
- IBM MQ
- Apache ActiveMQ
There are numerous message broker tools on the market, and the best candidate depends on factors such as flexible message routing and priority configuration, message replay, and a UI for monitoring messages. For example, RabbitMQ is a great fit for flexible message routing and priority configuration, while Kafka is a great choice for message replay. Next, we will explore RabbitMQ, as it holds the largest share of the current market.
RabbitMQ: the most widely deployed open-source message broker
RabbitMQ is the most popular open-source message broker, with a huge number of production deployments worldwide. It is lightweight, easy to deploy on-premises and in the cloud, and runs on all major operating systems. It supports multiple messaging protocols and most developer platforms, and can be deployed in distributed and federated configurations to satisfy high-scale, high-availability requirements.
RabbitMQ Features
Asynchronous Messaging
It supports multiple messaging protocols, message queuing, multiple exchange types, delivery acknowledgement, flexible routing to queues.
Developer Experience
It allows developers to deploy with Docker, BOSH, Chef, and Puppet, and offers cross-language messaging with popular programming languages such as .NET, Java, PHP, Python, JavaScript, Ruby, and many others.
Distributed Deployment
Offers clustered deployment for high availability and throughput, and federation across multiple availability zones and regions.
Enterprise & Cloud Ready
It supports pluggable authentication, authorisation as well as supports LDAP and TLS. Lightweight and easy to deploy across public and private clouds.
Tools & Plugins
Variety of plugins and tools supporting continuous integration, operational metrics, and integration with other enterprise systems. Flexible plug-in approach for extending its functionality.
Management & Monitoring
It supports HTTP-API, command line tool, and UI for managing and monitoring itself.
.NET Libraries for RabbitMQ
Below are two client libraries that make it easy to integrate .NET applications with RabbitMQ.
- RabbitMQ .NET Client (supports .NET Core and .NET 4.5.1+)
- RawRabbit, a higher-level client that targets ASP.NET vNext and supports .NET Core.
Note: Herein, I will cover only RawRabbit, highlighting its core features and providing essential code snippets.
RawRabbit
It is a modern .NET framework for communication over RabbitMQ. Its modular design and middleware-oriented architecture make the client highly customizable while providing workable defaults for topology, routing, and more.
Easily Configurable and extendable
RawRabbit can be easily configured with RawRabbitOptions, an options object that allows registering client configuration and plugins, and permits overriding internal services.
var client = RawRabbitFactory.CreateSingleton(new RawRabbitOptions
{
ClientConfiguration = new ConfigurationBuilder()
.SetBasePath(Directory.GetCurrentDirectory())
.AddJsonFile("rawrabbit.json")
.Build()
.Get<RawRabbitConfiguration>(),
Plugins = plugin => plugin
.UseProtobuf()
.UsePolly(config => config
.UsePolicy(queueBindPolicy, PolicyKeys.QueueBind)
.UsePolicy(queueDeclarePolicy, PolicyKeys.QueueDeclare)
.UsePolicy(exchangeDeclarePolicy, PolicyKeys.ExchangeDeclare)
),
DependencyInjection = ioc => ioc
.AddSingleton<IChannelFactory, CustomChannelFactory>()
});
Configuring Publish/Subscribe
Just a few lines of code and you can set up strongly typed publish/subscribe.
var rawClient = RawRabbitFactory.CreateSingleton();
await rawClient.SubscribeAsync<BasicMessage>(async message =>
{
Console.WriteLine($"Received: {message.Prop}.");
});
await rawClient.PublishAsync(new BasicMessage { Prop = "Hello, world!"});
Configuring Request/Response
RawRabbit’s (RPC) request/response configuration uses the direct reply-to feature for better performance and lower resource allocation.
var rawClient = RawRabbitFactory.CreateSingleton();
rawClient.RespondAsync<BasicRequest, BasicResponse>(async request =>
{
return new BasicResponse();
});
var rawResponse = await rawClient.RequestAsync<BasicRequest, BasicResponse>();
Configuring Ack, Nack, Reject and Retry
Unlike many other clients, basic.ack, basic.nack and basic.reject are first class citizens in the message handling.
var client = RawRabbitFactory.CreateSingleton();
await client.SubscribeAsync<BasicMessage>(async message =>
{
if(UnableToProcessMessage(message))
{
return new Nack(requeue: true);
}
ProcessMessage(message);
return new Ack();
});
In addition to the essential acknowledgements, RawRabbit also supports delayed retries:
var client = RawRabbitFactory.CreateSingleton();
await client.SubscribeAsync<BasicMessage>(async message =>
{
try
{
ProcessMessage(message);
return new Ack();
}
catch (Exception e)
{
return Retry.In(TimeSpan.FromSeconds(30));
}
});
Granular control over each call
Users can add or change properties in the IPipeContext to adapt calls for specific types of messages. This makes it possible to modify, per call, topology features, the publish-confirm timeout, and consumer concurrency.
await subscriber.SubscribeAsync<BasicMessage>(received =>
{
receivedTcs.TrySetResult(received);
return Task.FromResult(true);
}, context => context
.UseSubscribeConfiguration(config => config
.Consume(consume => consume
.WithRoutingKey("CustomKey")
.WithConsumerTag("CustomTag")
.WithPrefetchCount(2)
.WithNoLocal(false))
.FromDeclaredQueue(queue => queue
.WithName("CustomQueue")
.WithAutoDelete()
.WithArgument(QueueArgument.DeadLetterExchange, "dlx"))
.OnDeclaredExchange(exchange => exchange
.WithName("CustomExchange")
.WithType(ExchangeType.Topic))
));
Conclusion
Neova Solutions Pvt Ltd has the expertise to integrate message broker software, such as Kafka and RabbitMQ, into business applications to meet high-scale, high-availability requirements.
Travel and Tourism Industry: Things you should know
Travel and tourism are near a standstill due to the COVID-19 pandemic, and no one knows when they will restart. When travel resumes, it will drive growth in travel insurance; travel businesses will come back online and travellers will start going out again. In a nutshell, until the COVID-19 virus is under control, it is too soon to judge when people can start booking again.
The Travel / Tourism industry comprises all the companies that provide the services and products that are meant and used by tourists.
As the online travel industry continues to grow, competition between travel providers is at its peak.
Choosing a domain name is a big decision, especially in the travel industry. Your URL is one of the first things people notice when they do online travel research.
Key things involved in Travel domain setup
- Get a domain name that is specific to your business; alongside .com, travel-focused TLDs are available
E.g. .travel, .tours
- Keep specific URL to the travel industry
- Domain expiration renewal reminders enabled
- Remove bunk DNS entries
- Domain protected by WAF or web proxy
- HTTPS enabled using strong, publicly trusted certificate
- Mail Exchanger (MX) records set using mail provider
- MTA-STS to increase mail transport security
- TLS-RPT to report on TLS issues with your email
- DomainKeys Identified Mail (DKIM)
- “HSTS for email” using STARTTLS
- DNSSEC authenticates your DNS entries to reduce the likelihood of spoofing or maliciously manipulated entries
Reference – https://www.johndball.com/my-domain-name-security-setup-checklist/
What investment is involved in setting up a travel domain?
- You can pay a host agency $400 to $1,200 to get started
- To start your own independent agency, it may cost between $1,000 and $10,000, depending on the markets you serve. You can also build mobile apps for iOS and Android.
- To create a SaaS (software-as-a-service) platform, you can choose a cloud (GCP/AWS/Azure) based on your requirements and how much scaling you need. If you have an existing business with relationships with multiple service providers, you can maintain a common dashboard.
Strategies to increase Revenue
- Create a good website – It is very important to have an attractive and user-friendly website, with the ability to do branding (customizing the same functionality for multiple users, such as operators or TMCs)
- Collaborate with social media – It is a good sales technique to market via social media such as Facebook, Twitter, LinkedIn, etc
- Option to choose packages – Combine your services to provide unique experience to your clients
- Request for review – Ask your customers to share their feedback or experiences
- Know your client needs – You need to understand the needs of your customers to speak their language
- Be responsive – Respond to all your customer requests within SLA
Opportunities in Travel Domain
There are several opportunities in the travel domain; these days many employers give importance to travel/vacations for their employees for the reasons below:
- Morale boost – The promise of a vacation is enough to keep your employee’s morale high
- Stress relief – Encouraging employees to take time offs is enough to recharge their batteries and to help them clear their heads
- Improve productivity – Encouraging your employees to go on vacations can help them in achieving job satisfaction thus leading to higher productivity
- Job requirements – Employers can offer travel to their employees to attend conferences, trade shows, meet-ups, etc
- Paid vacations – Employers can offer paid vacations to their employees for certain periods of time
- Opportunity while signing a new client/business – Face-to-face meetings play a vital role in signing the contract.
- Opportunity of Networking – When you travel, you have the chance to meet new people you would never interact with if you were sitting in your office. This may lead to expanding your company’s footprint with the new contract.
- Broadening the order book with the help of existing clients/customers
- Hotel – This is one of the best opportunities, from which you can earn a good return on investment.
- Travel Agency – Establishing a travel agency is a good option if you want to advise customers about tour packages and help plan their travel, charging them a commission.
- Vehicle Renting – You can rent your vehicle to tourists so that they can go in the direction they want
- Photography – Photography is another popular option as every tourist wants to create memories of the place they visit. The best option is to tie up with travel agents or hotels to attract customers.
Travel Revenue Growth
- Good Revenue growth is only possible when you sell the right product or service to the right clients at the right price
- Travel industry operates through a large network of interconnected industries which aim to serve the people on travel
- Top source of employment – The travel industry is one of the top sources of employment across the globe, generating about 10% of employment
- Economic growth – Most countries look at tourism/travel not just to attract tourists but to develop a platform that supports the country's economic growth
- Travel industry is the source of foreign exchange earnings
How different industries depend on the travel domain
- Logistics
- Goods transport
- Fleet management

- Globalization has increased the number of passengers/people travelling to foreign countries each year. The tourism industry plays an important role in managing this commercial activity that creates demand and growth for many other industries.
- Insurance – The insurance industry largely depends on travel, offering insurance in a variety of ways, e.g. insurance for medical emergencies, luggage insurance, etc
- Hotel Industry – Hotels are the popular form of accommodation for people travelling to different locations
- Food and Beverage industry – This caters to the needs of travellers with the provision of food and drink services. This industry is broadly divided into the below categories:
- Restaurants – This is the most common category in the Food and Beverage Industry. Restaurants are of different types i.e. Fast Food chains, Family Restaurants, etc.
- Catering – Provided in many forms including airplanes, ferries, trains, etc
- Bars & Cafes – Bars serve alcoholic beverages along with soft drinks; cafes serve hot drinks and snacks
- Entertainment industry – The entertainment industry plays a vital role in the travel industry. Entertainment site / Theme parks can sometimes be the reason for travel
- Casino – Casinos are sometimes connected to Resorts and Hotels. They provide the gambling activities like dice games, card games, etc
- Shopping – Shopping is another crucial aspect of the travel industry. This includes Shopping centers, local markets, Duty-free shops etc
- Financial Services – It mainly includes the services related to Currency conversion
- Educational – Many people travel to conferences, training sessions, academic institutions.
- Cultural events – Religious or cultural events keep taking place in the world which attracts a number of tourists/travellers from across the globe
Conclusion:
The travel industry brings together people from all over the world. It also lifts emerging economies, bringing job creation and more.
For queries related to Travel domain contact sales@neovatechsolutions.com