The New World of Composable Enterprises?

A composable enterprise, defined by Gartner as “an organization that delivers business outcomes and adapts to the pace of business change”, relies on the assembly of interchangeable application building blocks. This architectural overhaul has largely been driven by a demand for more configurable application experiences, and the need to evolve existing application portfolios that are often too risky and costly to replace.

New business opportunities require agility from application portfolios; however, many enterprises are still limited in their ability to adapt. Why? They rely on monolithic ERP systems and cumbersome legacy applications with static processes and haphazard structures. A modular setup can enable a business to rearrange as required depending on external or internal factors, such as shifts in consumer attitudes or sudden supply chain disruptions. Organizations are experiencing these shifts now and require a new approach to enterprise applications to be able to adapt. 

Bridge the CX Gap with Greater Composability

A composability approach is the best way to capture the advantages of modern enterprise software. According to a recent Boomi report, by 2023 organizations that have adopted a composable approach will outpace the competition by 80% in the speed of new feature implementation. This shift requires enterprises to rethink how they architect their operations, harness a combination of packaged functions and technologies, and deliver seamless moments of service to their customers.

There are three key ways that businesses can make the composability shift:

  • Adopting a service mindset
  • Scaling the delivery of microservices
  • Packaging business capabilities using Application Programming Interfaces (APIs). 

1. Designing for Service: Compete on Outcomes, not on Products 

A composable enterprise, whose portfolio is assembled from heterogeneous, best-of-breed applications, can address key inflection points throughout every customer, product, or service lifecycle.

As more consumers demand continuous value and reliability throughout an asset’s lifetime, businesses must shift to selling outcomes and experiences instead of products to meet these new expectations for quality service. This requires each part of an operation to align, not around immediate sales or revenue, but around delivering a quality Moment of Service™ — the inflection point where everything comes together to create better value and outcomes for customers. 

Moving to a servitized model requires a composable stack that delivers services to order. At a systems level, this transformation requires organizations to adapt applications dynamically and deliver positive customer experiences through effective quality management, customer support, and access to complete information about the service offering. Unlike traditional enterprise software, this involves connecting data and applications that have often sat in separate silos. A composable enterprise provides a service-oriented architecture that enables businesses to become outcome-based and ready to adapt quickly to future disruptions.

2. Scale Component-based Architecture with Microservices

If applications are built as loosely coupled services, companies can employ a composable architecture to capitalize on independently deployable modules organized around business capabilities. This allows organizations to swap modules in and out to suit emerging needs and to build a well-structured, best-fit solution for their unique business.

In contrast to a monolithic architecture, businesses can add resources to the microservice that needs them most, rather than scaling the entire application as demand increases. In practice, this allows businesses to simplify customizable workflows and optimize business processes while applying tools such as robotic process automation, artificial intelligence, or the plethora of hyper-automation capabilities available today.

3. Embrace Open-ended APIs to Maximize Data-sharing Capabilities

Packaged business capabilities (PBCs) assembled using APIs are the foundation of every composable enterprise. Businesses use them to share data securely across cloud services, business systems, and mobile applications. Historically, APIs have been used in monolithic applications to exchange data between the application as a whole and external applications and services. In composable software, APIs exchange data both from individual modules to external applications and within the application itself, from module to module. This has significant implications for how systems are designed and built.

APIs can provide a controlled and consumable method for connecting and sharing consumer and business data by creating experiences tailored to individual needs. For instance, an API-centric model can secure and manage data access to help businesses with faster decision-making and deliver relevant new services adaptable to market changes. In the future, there will be more automated continuous process improvements as machine learning models recommend, or even proactively make, process changes to improve outcomes for the business and end customer.
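As a minimal sketch of the module-to-module exchange described above (all names here are illustrative, not from any particular product), two packaged business capabilities might each own their data and expose it only through a narrow API that a composable application wires together:

```python
# Hypothetical sketch: two packaged business capabilities (PBCs),
# each exposing a narrow API, composed into one workflow.

class PricingCapability:
    """PBC that owns pricing data and exposes it only via its API."""
    def __init__(self):
        self._prices = {"sku-1": 40.0, "sku-2": 25.0}

    def quote(self, sku: str) -> float:
        return self._prices[sku]

class OrderCapability:
    """PBC that builds orders; it never touches pricing data directly."""
    def __init__(self, pricing: PricingCapability):
        # Module-to-module API call, not a shared database table.
        self._pricing = pricing

    def create_order(self, skus: list) -> dict:
        total = sum(self._pricing.quote(s) for s in skus)
        return {"skus": skus, "total": total}

order = OrderCapability(PricingCapability()).create_order(["sku-1", "sku-2"])
print(order["total"])  # 65.0
```

Because the order capability only knows the pricing capability's API, either module can be swapped for another implementation without touching the other's internals.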

Opt-in to the Composability Evolution

To overcome the limitations of monolithic applications, businesses must rethink their approach to enterprise applications — starting with the business architecture and technology stack. A composable software architecture enables organizations to address the internal and external pressures that send shockwaves throughout the value chain. 

As a composable enterprise, organizations can re-engineer their businesses so that customer touchpoints and stages come together for better moments of service, but companies must ensure that processes are optimized across each of these inflection points to mitigate issues and fuel growth. An IT architecture built on a foundation of composability will be essential to delivering a software-powered business development strategy that provides continual value to customers and to the business itself.

Credit – Rick Veague at CTO Universe


Microservice Architecture Pattern for Architects

This blog post covers the advantages, disadvantages, internal service communication, SOA vs. MSA, prerequisites, and other aspects of microservice architecture that you need in order to architect your application with microservices.

  1. Origin of Microservices – The term "microservices" was first used by James Lewis at a workshop for software architects in March 2012. By 2013–14, microservices was a well-known term among architects at large enterprises. In 2014, James Lewis and Martin Fowler described microservices as: "A microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies."
  2. Prior to the Microservices Architectural Style – Before microservices, monolithic architecture and Service-Oriented Architecture (SOA) were widely used for large enterprise applications. The term "monolithic architecture" was borrowed from the Unix world, where it described standalone programs whose functionality does not depend on any other component.

     Monolithic Architecture: in a monolithic architecture, the user interface, business logic, and database access components all live in one application, and each component depends on the others.
    • User Interface: handles all user events and rendering through HTML, JSON, and client-side libraries such as jQuery or Angular.
    • Business Logic: validates all business rules against user inputs, user events, and data passed to and from the database.
    • Database Access: provides simplified access to the database, the persistent storage of application data.

    Disadvantages of Monolithic Architecture
    • InterModule Dependency: Components of Monolithic Architecture are tightly coupled with each other resulting in multiple changes required if any other module changes.
    • One component fails resulting entire system being down: If the business layer or DataAccess layer fails in responding entire system is going to be down.
    • Large Codebase: as repetitive code increases to achieve required functionality, code becomes very large and becomes difficult to maintain.
    • Complex Code Deployment: for a minor change, the entire application must be redeployed.

    Service-Oriented Architecture

    To overcome the issues of monolithic architecture, SOA was a better choice. The main difference is that SOA services run in separate processes, whereas a monolithic application runs in a single process; running services in separate processes yields better scalability. SOA-oriented enterprise applications are designed as a collection of services, typically exposed as RESTful services or ASMX web services. Each SOA service contains complete business functionality, with the required code and data-integration endpoints. For example, in an eCommerce application, an OrderService would have features such as placing orders, calculating the order total (applying product discounts), reducing the available product quantity, and processing payment. The service interface provides loose coupling, so clients can consume services with little or no knowledge of how the integration is implemented inside the service. Services are exposed using SOAP (Simple Object Access Protocol), HTTP, or JSON, and can be published to an intranet or IIS so that clients can find and consume them.

    Advantages of SOA
    • Reusability: services can be used by multiple clients. An order service can be used by the website or the mobile app, and it can also be consumed by a back-office application for fulfillment or reporting purposes.
    • Scalability: services can be scaled up easily to handle growing numbers of users, sessions, and transactions.
    • Upgradeability: as services are loosely coupled, it is easy to update a service's functionality and deploy code without affecting current client implementations.

    Disadvantages of SOA
    • Large upfront cost: even though multiple teams can work simultaneously on different services of the same application, deployment is costly because each service is deployed as a separate host.
    • Complex Service Management: in enterprise-level applications, huge amounts of data are exchanged between clients and services every millisecond, which requires very complex service management and high-bandwidth servers. This also makes asynchronous communication difficult to use.
    • Validation Overhead: whenever a service receives a request or sends a response, the input parameters are validated, and this overhead increases response time.
  3. Understanding Microservice Architecture – Microservice architecture is a trending software design pattern in which a single application is developed as a set of smaller services. It builds on loosely coupled services that can be developed, deployed, and maintained independently. Each service is independent, is responsible for a separate, individual task, and communicates with other services through simple APIs to achieve complex business functionality.

     In a monolithic application, a single assembly contains the user interface, business rule logic, and data access layer; every component of the application, such as Order, Product, and Payment, is part of the same assembly. With microservices, business components are segregated into individual services, each a smaller unit. There is no need to call the Order service while a payment is processing, and no need to use the Product service when inventory is updated. All of these services are independent of each other, and each is responsible for its own execution.

     The original post's diagrams (monolithic architecture vs. microservice architecture) show how an eCommerce application can be changed from a monolith to microservices. The Product, Payment, and Order services are independent of each other; no service is aware of what the others are doing. Each service can have a different technology stack depending on need: products are stored in a cloud database, payment logs are stored on disk, and order data is stored in an on-premises database. Each service can also be hosted separately.
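The eCommerce split described above can be sketched in a few lines (a toy illustration: the service and product names are invented for this example, and real services would run as separate processes communicating over the network rather than as in-process classes). Each service owns its own data store and talks to the others only through their APIs:

```python
# Hypothetical sketch of the eCommerce decomposition: Product, Payment,
# and Order services, each owning its own storage.

class ProductService:
    def __init__(self):
        self._catalog = {"p1": {"name": "Widget", "qty": 10}}  # own store

    def reserve(self, product_id: str) -> bool:
        item = self._catalog.get(product_id)
        if item and item["qty"] > 0:
            item["qty"] -= 1
            return True
        return False

class PaymentService:
    def __init__(self):
        self._log = []  # own store (e.g. payment logs on disk)

    def charge(self, amount: float) -> bool:
        self._log.append(amount)
        return True

class OrderService:
    """Coordinates only through the other services' APIs."""
    def __init__(self, products: ProductService, payments: PaymentService):
        self._products, self._payments = products, payments
        self._orders = []  # own store (e.g. on-premises database)

    def place_order(self, product_id: str, amount: float) -> bool:
        if self._products.reserve(product_id) and self._payments.charge(amount):
            self._orders.append((product_id, amount))
            return True
        return False

svc = OrderService(ProductService(), PaymentService())
print(svc.place_order("p1", 9.99))  # True
```

Because each service hides its storage behind its API, any one of them can be rewritten, rescaled, or rehosted without the others noticing.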
  4. Advantages of Microservices Architecture
    • Isolation: as each service is independent of the others, microservices make great use of isolation. If one service fails, only that particular piece of functionality stops working; the rest of the application can keep running. This also isolates component-specific errors. Developers can build and deploy services without stopping or redeploying the entire application, so in theory there is no downtime for your application due to code changes. Each service owns its own CRUD operations.
    • Scalability: as an application based on microservices is a group of small components, it is easier to scale a specific component up or down as required. Independent functions can easily be extracted or moved for reuse in other services. As the workload grows, with more data to process and more user sessions, additional instances of a microservice can be deployed to handle growth in that specific component.
    • Smaller Codebase: each service is small, so it is easy to develop and deploy as a unit, which also makes the code easier to maintain.
    • Faster Project Development: a team dedicated to a component can change its code, test it, and deploy it without affecting other teams. In turn, development and deployment become faster through the separation of services.
    • Compatible with CI/CD, DevOps, and Agile: applications with a microservices architecture fit well with CI/CD, Agile, and container-based methodologies. Each team can choose the process that works best for developing and deploying its application component; tools like Docker and Kubernetes are commonly used for this.
      Amazon and Netflix are successful use cases of microservices architecture. (The original post includes a diagram of Netflix's microservices architecture at this point.)
  5. Disadvantages of Microservices Architecture
    1. Microservices are more expensive: microservices require more resources, as each service is isolated and needs its own CPU time and runtime environment. You will need more tools, servers, and APIs; more services mean more resources. Handling multiple databases, file operations, and transactions also becomes difficult.
    2. Communication between services is complex: because each service is independent, communication between multiple services becomes difficult. Over time, remote calls between services increase, resulting in network latency. A better way is to create a shared library containing functionality used throughout the application, such as users and their roles.
    3. Global testing is difficult: as the application is spread across multiple services, end-to-end testing is difficult, especially for components that depend on other services.
    4. Communication patterns must be chosen carefully: you need to be very careful when selecting communication and message patterns for microservices. Used incorrectly, they compromise the very advantages microservices provide. Each service in the application should be simple and lightweight. Choose synchronous or asynchronous calls wisely, depending on whether your client calls need to block, and pick message formats such as JSON, XML, or binary, typically carried over HTTP/S.
  6. Internal Communication in Microservices – A microservices-based application is a distributed system executing multiple processes and services, where each service instance is a process. Services must interact with other internal services using an inter-process communication protocol such as HTTP, AMQP, or TCP. For example, in the eCommerce application, the Order service needs to interact with the User or Authorization service to verify the customer before placing an order. There are three different ways to implement internal communication between services:
    1. HTTP communication – HTTP communication can be synchronous or asynchronous. Synchronous HTTP communication creates coupling between two services, which we generally want to avoid: the services involved communicate directly with each other through requests and responses. A popular architectural style for request/response communication is REST. This approach is based on, and tightly coupled to, the HTTP protocol, embracing HTTP verbs such as GET, POST, and PUT. REST is the most commonly used communication style when creating services; you can implement REST services, for example, as ASP.NET Core Web API services.
    2. Message communication – With message-based communication, services do not communicate with each other directly. Instead, services push messages that other services have subscribed to. This removes the complexity associated with sending and receiving messages through direct HTTP communication. Services do not need to know how to talk to one another, but every service must know about the message broker.
    3. Event-driven communication – This is an asynchronous approach that removes coupling between services. A message broker is required, to which each service writes its events. A consuming service does not need to know the details of the event payload; it reacts to the event being raised rather than to a specific message.
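The first option, synchronous HTTP request/response, can be sketched with only Python's standard library (the "User" service and its response fields are invented for this illustration; a production service would use a proper web framework):

```python
# Minimal sketch of synchronous request/response communication between
# two services: an Order service blocking on a call to a User service.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class UserServiceHandler(BaseHTTPRequestHandler):
    """Stands in for a 'User' service that validates customers."""
    def do_GET(self):
        body = json.dumps({"customer": self.path.strip("/"), "valid": True})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):  # keep the demo quiet
        pass

# Run the 'User' service in a background thread on an ephemeral port.
server = HTTPServer(("127.0.0.1", 0), UserServiceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The 'Order' service blocks until the reply arrives -- this is the
# coupling that the text above warns about.
url = "http://127.0.0.1:%d/cust-42" % server.server_port
with urllib.request.urlopen(url) as resp:
    reply = json.loads(resp.read())
print(reply["valid"])  # True

server.shutdown()
```

An asynchronous or event-driven design would replace the blocking `urlopen` call with a message published to a broker, letting the Order service continue without waiting.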
  7. SOA vs. Microservices – Without a good understanding of both architectures, SOA and microservices can sound similar: in both, each service has a specific responsibility to perform, unlike in a monolithic architecture. However, the two differ in the following ways.
    • Service Deployment and Scalability: in SOA, each team member must be aware of the common messaging patterns between services, since SOA uses an Enterprise Service Bus (ESB) for communication. The ESB can become a single point of failure for the entire application, and scaling a specific service is only possible if the ESB can handle it. In microservices, each service is truly independent, and services can be created and deployed independently. It is easier to develop a new version of a service and deploy it without affecting the others, so frequent deployment and scaling of specific services is entirely feasible.
    • Component Sharing: component sharing is a main benefit of SOA, which couples components and their data into a single unit with minimal dependencies. Because SOA relies on multiple requests and responses through the ESB to complete one piece of business functionality, systems built on SOA are slower than microservice architectures, and data storage is shared among all services. Microservices architecture minimizes direct component sharing: point-to-point communication is the standard approach, required shared functionality can be delivered via packages (for example, NuGet packages in the .NET world), and each service has its own independent data storage.
    • Heterogeneous Interoperability: interoperability refers to the ability of clients developed in a different programming language to consume services. SOA promotes multiple heterogeneous protocols through its messaging middleware, using AMQP, MSMQ, and SOAP as its primary remote access protocols. Microservices architecture expects all clients to use the same remote access protocol as the service; MSA typically uses REST-based protocols, which are usually homogeneous.
    • Maintainability: SOA components are bigger and perform more than one function, so the application code base becomes difficult to maintain; SOAs are implemented as complete subsystems. Microservices are smaller components, each designed for only one purpose, which makes them more maintainable than SOA. The prefix "micro" refers to the granularity of each component.
  8. Prerequisites of Microservices

     Deployment and QA: a new version of a microservice demands quick development, testing, and deployment. This requires infrastructure to test the updated version and deploy it to the production server; test-case execution and deployment can be automated using CI/CD. Because the development team may push changes to UAT/staging for testing more frequently, you need a QA team to test those changes.

     A monitoring framework: as the number of services and the functionality of the entire application scale, you need a framework that can monitor the functionality and health of each system and of cross-service communication, find any possible issues, and respond to those issues in an automated way.
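The event-driven communication style described in the internal-communication section can be illustrated with a toy in-memory broker (a stand-in for a real broker such as RabbitMQ or Kafka; event names and handlers are invented for this sketch):

```python
# Hypothetical in-memory broker: the producing service only emits events,
# and subscribers react without the producer knowing who they are.
from collections import defaultdict

class MessageBroker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)  # a real broker delivers asynchronously

broker = MessageBroker()
shipped, invoiced = [], []

# Shipping and Invoicing services each react to the same event.
broker.subscribe("order.placed", lambda e: shipped.append(e["order_id"]))
broker.subscribe("order.placed", lambda e: invoiced.append(e["order_id"]))

# The Order service just publishes; it is unaware of shipping or invoicing.
broker.publish("order.placed", {"order_id": 101})
print(shipped, invoiced)  # [101] [101]
```

New consumers can be added by subscribing to the event, without any change to the service that publishes it; this is exactly the decoupling the event-driven approach buys.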

Credit:

AWS Fargate now supports Amazon ECS Windows containers


Posted On: Oct 28, 2021

Today, AWS announces the availability of AWS Fargate for Amazon ECS Windows containers. This feature simplifies the adoption of modern container technology for Amazon ECS customers by making it even easier to run their Windows containers on AWS.

Customers running Windows applications spend a lot of time and effort securing and scaling virtual machines to run their containerized workloads. Provisioning and managing the infrastructure components and configurations can slow down the productivity of developer and infrastructure teams. With today’s launch, customers no longer need to set up Auto Scaling groups or manage host instances for their application. In addition to providing task-level isolation, Fargate handles the necessary patching and updating to help provide a secure compute environment. Customers can reduce the time spent on operational efforts, and instead focus on developing and delivering innovative applications.

With Fargate, billing is at a per second granularity with a 15-minute minimum, and customers only pay for the amount of vCPU and memory resources their containerized application requests. Customers can also select a Compute Savings Plan, which allows them to save money in exchange for making a one- or three-year commitment to a consistent amount of compute usage. For additional details, visit the Fargate pricing page.

Fargate support for Amazon ECS Windows containers is available in all AWS Regions, excluding AWS China Regions and AWS GovCloud (US) Regions. It supports the Windows Server 2019 Long-Term Servicing Channel (LTSC) release on Fargate Windows Platform Version 1.0.0 or later. Visit our public documentation and read our Running Windows Containers with Amazon ECS on AWS Fargate blog post to learn more about using this feature from the API, AWS Command Line Interface (CLI), AWS SDKs, or the AWS Copilot CLI.

Responsible Operations: Data Science, Machine Learning, and AI in Libraries

Responsible Operations is intended to help chart library community engagement with data science, machine learning, and artificial intelligence (AI). It was developed in partnership with an advisory group and a landscape group of more than 70 librarians and professionals from universities, libraries, museums, archives, and other organizations.

This research agenda presents an interdependent set of technical, organizational, and social challenges to be addressed en route to library operationalization of data science, machine learning, and AI.

Challenges are organized across seven areas of investigation:

  1. Committing to Responsible Operations
  2. Description and Discovery
  3. Shared Methods and Data
  4. Machine-Actionable Collections
  5. Workforce Development
  6. Data Science Services
  7. Sustaining Interprofessional and Interdisciplinary Collaboration

Organizations can use Responsible Operations to make a case for addressing challenges, and the recommendations provide an excellent starting place for discussion and action.
by Thomas Padilla

Padilla, Thomas. 2019. Responsible Operations: Data Science, Machine Learning, and AI in Libraries. Dublin, OH: OCLC Research.

Do we need more particularized data privacy rights for U.S. citizens?

In late June, in a national first, California passed A.B. 375, the California Consumer Privacy Act of 2018, a sweeping piece of legislation that, on its face, grants California residents data privacy rights that have never before been granted in the United States.


The law was driven by recent privacy scandals and the political pressure of a potential privacy rights ballot initiative that advocates agreed to drop in exchange for the passage of A.B. 375. Even more than the practical implications of the law, its passage spurred additional public debate that could lead to federal data privacy legislation and more particularized data privacy rights for U.S. citizens.

Generally, A.B. 375 allows consumers (defined as natural persons who are California residents) to demand access to all of the personal information that a company has collected relating to them, along with a full list of third parties with which the company has shared that data. In addition, the law allows consumers to sue companies – including through class actions – if they violate its privacy guidelines.

The law applies to for-profit companies that collect consumers’ personal information, conduct business in California, and fall into one of three categories:

  1. Realize gross revenues in excess of $25 million.
  2. Receive or disclose the personal information of 50,000 or more consumers, households or devices annually, or
  3. Receive 50 percent or more of annual revenues from selling consumers’ personal information. Additional provisions bring corporate affiliates of these companies within the law’s scope if they share branding.

A.B. 375 grants consumers four categories of privacy rights.

First, the right to know what personal information a business has collected about them, including the source of that information, what is being done with it, and with whom it is being shared.

Second, the right to “opt out” of a company being permitted to sell their personal information to third parties.

Third, the right to request the deletion of their personal information. And fourth, the right to not be discriminated against if they exercise their data privacy rights.

Interestingly, however, A.B. 375 opens the door to allowing companies to pay consumers for the right to share their data by permitting, under certain circumstances, the granting of a different price to a consumer related to the value of that consumer’s data.

For the purposes of this law, “personal information” is defined broadly, including any information that identifies, relates to, describes, or is capable of being associated with a particular consumer or household. But A.B. 375 does exclude information that is lawfully made available from federal, state, or local records, provided that such information is used for a purpose compatible with the purpose for which it is maintained. A.B. 375 also carves out de-identified personal data (i.e., anonymized data) and aggregate data (both of which are narrowly defined).

The law does not come into effect until January 1, 2020, and numerous companies and lobbyists will be proposing amendments that could narrow its scope and impact. Companies that deal in consumer data – including retailers, internet service providers, and other web-based companies – will be working to scale back the privacy rights set forth in A.B. 375, citing the costly nature of compliance.

The state attorney general will also work with public stakeholders to develop a particularized compliance framework for impacted companies to work toward in the coming 16 months. But even a curtailed version of A.B. 375 is likely to require significant privacy policy changes for companies falling within its reach.

Perhaps most importantly, the passage of A.B. 375 coincides with increasing public and political acknowledgement of the need to better protect personal data. The week before it was signed into law, the Supreme Court issued its decision in Carpenter v. United States, 585 U.S. ___ (2018), holding (in a Fourth Amendment context) that an individual has a reasonable expectation of privacy in his geolocation data, despite that data being collected and held by cell phone companies.

Since June, many federal lawmakers have ramped up efforts to draft and pass data privacy bills that address the manners in which companies collect, maintain, and use personal information.

For now, companies impacted by A.B. 375 should be crafting draft privacy policies and procedures that would allow them to comply with the current iteration of the law. At the same time, they should follow proposed amendments to the law, raise issues with the California legislature if they unearth cost or logistical difficulties in their early compliance efforts, and keep an eye on Congress’ efforts on the same topic.


Courtesy: John C. Eustice

John C. Eustice is a member at the law firm Miller & Chevalier, chartered in Washington, D.C.

Are you making the most of your mainframe data?

Mainframe data is big data!

Data Quality Matters

When most people think of legacy software, they think of software that is outdated and due for replacement.

Yet an alternative definition of legacy, particularly when it comes to mainframe applications, is, simply, software that works.

This is a definition that our partner, Syncsort, is proud of. The legacy DMX Sort product has been helping customers to reduce the cost of running their mainframe for decades.

This legacy – the understanding of how to optimally move vast amounts of data – is brought to Syncsort’s line of data integration tools – particularly for moving both logs and data from the IBM mainframe and the IBM i series to advanced analytics platforms like Hadoop and Splunk.

These data integration and change data capture solutions are complemented by the data quality stack, meaning that we don’t just move data efficiently, we ensure its quality as well.

Mainframe data is big data


5 Product Data Levels to Consider

Different kinds of product data may be divided into levels. Product pricing is usually a subject belonging mainly to the ERP side of things. This write-up on Product Master Data Management throws light on how to connect the dots and take things to the next level.

Liliendahl on Data Quality

When talking about Product Master Data Management (Product MDM) and Product Information Management (PIM), I like to divide the different kinds of product data into the schema below:

Five levels

Level 1, Basic Data

At the first level, we find the basic product data that typically is the minimum required for creating a product in any system of record.

Here we find the primary product identification number or code that is the internal key to all other product data structures and transactions related to the product within an organization.

Then there usually is a short product description. This description helps internal employees identify a product and distinguish it from other products. Most often the product is named in the official language of the company.

If an upstream trading partner produces the product, we may find the identification of that supplier here too. If the product is part of internal production, we may…

View original post 872 more words

Data-centric approach to enterprise architecture

Data is the key to taking a measured approach to change, rather than a simple, imprudent reaction to an internal or external stimulus. But it’s not that simple to uncover the right insights in real time, and how your technology is built can have a very real impact on data discovery. Data architecture and enterprise architecture are linked in responding to change while limiting unintended consequences.

DBTA recently held a webcast featuring Donald Soulsby, vice president of Architecture Strategies at Sandhill Consultants, and Jeffrey Giles, principal architect at Sandhill Consultants, who discussed a data-centric approach to enterprise architecture. Sandhill Consultants is a group of people, products, and processes that help clients build comprehensive data architectures, resulting from a persistent data management process founded on a robust data governance practice and producing trusted, reliable data, according to Soulsby and Giles.

A good architecture for data solutions addresses:

  • Risk Management: strategic, regulatory, media, consumer
  • Compliance: statutory, supervising body, watchdog, commercial, value chain, professional

Enterprise architecture frameworks start with risk management as their building blocks, Soulsby and Giles said. A typical model asks what, how, where, when, and who; a unified architectural approach asks what, how, where, when, who, and why. This type of solution is offered by erwin and is called Enterprise Architecture Prime 6. According to Soulsby and Giles, the platform can achieve compliance, whether regulatory or value chain; can limit unintended consequences; and provides risk management for classification, valuation, detection, and mitigation. The erwin and Sandhill Consultants offerings provide a holistic view of governing architectures from an enterprise perspective. “This set of solutions provides a strong data foundation across the enterprise to understand the impact of change and to reduce risk and achieve compliance,” Soulsby and Giles said.
An archived on-demand replay of this webinar is available here.

via The Building Blocks of Great Enterprise Architecture for Uncovering Data — Architectural CAD Drawings

Data Architecture in a digital world; empowering the Data Driven Enterprise

To be truly data-driven, an organisation must carry the data management discussion throughout the whole organisation.

Source: Data Architecture in a digital world; empowering the Data Driven Enterprise

Automating Enterprise Architecture

Modern EA processes must involve more stakeholders in the EA process so that EAs themselves aren’t the ones actually doing each and every task. The combination of smart tooling and collaborative process is really the key to success in automating your enterprise architecture practice.
