OpenShift Partner Reference Architectures
Red Hat’s partners play a key role in developing customer relationships, understanding customer needs, and providing comprehensive joint solutions. As customers use Red Hat technologies to help solve increasingly complex business issues, partners provide reliable guidance, technical information, and even engineered integrations to help customers make sound technology decisions.
For this post, the focus is on partners that are showcasing their technology paired with the OpenShift platform. Whether the technology comes from our system vendor partners, independent software vendors (ISVs), or cloud service providers, we are including a library of reference architectures here. Reference architectures combine partner technology with Red Hat technology to formulate a best-practices design and to simplify the process of creating a stable, highly available, and repeatable environment on which to run your applications on OpenShift.
Using sidecars to analyze and debug network traffic in OpenShift and Kubernetes pods
In the world of distributed computing, containers, and microservices, much of the interaction and communication between services happens via RESTful APIs. While developing these APIs and the interactions between services, I often need to debug the communication between services, especially when things don’t seem to work as expected.
Before the world of containers, I would simply deploy my services on my local machine, start up Wireshark, execute my tests, and analyze the HTTP communication between my services. For me, this has always been an easy and effective way to quickly analyze communication problems in my software. However, this method of debugging does not work well in a containerized world.
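To make the sidecar idea concrete, here is a minimal sketch (not taken from the original post) that uses the Kubernetes Python client to deploy a pod containing the application container plus a tcpdump sidecar. Because containers in a pod share a network namespace, the sidecar can capture the application's HTTP traffic. The image names, port, labels, and namespace below are illustrative assumptions.

# Minimal sketch: a pod with the application container and a tcpdump debug sidecar.
# Containers in a pod share the network namespace, so the sidecar sees the app's traffic.
# Image names, port, labels, and namespace are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="my-service-debug", labels={"app": "my-service"}),
    spec=client.V1PodSpec(
        containers=[
            # The application under test (assumed image and port).
            client.V1Container(
                name="my-service",
                image="quay.io/example/my-service:latest",
                ports=[client.V1ContainerPort(container_port=8080)],
            ),
            # Debug sidecar: captures HTTP traffic on the shared network namespace.
            client.V1Container(
                name="tcpdump-sidecar",
                image="nicolaka/netshoot",
                command=["tcpdump", "-i", "any", "-A", "port", "8080"],
            ),
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

The same pod spec could just as well be written as YAML and applied with kubectl or oc; the point is simply that the sidecar rides alongside the application container and can observe its traffic without changing the application image.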
Kubernetes Warms Up to IPv6
There’s a finite number of public IPv4 addresses, and the IPv6 address space was specified to solve this problem some 20 years ago, long before Kubernetes was conceived. But because Kubernetes was originally developed inside Google, and cloud services like Google and AWS have only relatively recently started to support IPv6 at all, it started out with only IPv4 support.
That’s a problem for organizations that are already committed to IPv6, perhaps for IoT deployments where there are simply too many devices for IPv4 to address. “IoT customers have devices and edge devices deployed everywhere using IPv6,” notes Khaled (Kal) Henidak, a Microsoft principal software engineer who works on container services for Azure and coordinates Microsoft’s upstream contributions to Kubernetes.
Technical Deep-Dive of Container Runtimes
As you might have already seen, SUSE CaaS Platform will soon support CRI-O as a container runtime. In this blog, I will dig into what a container runtime is and how CRI-O differs architecturally from Docker. I’ll also dig into how the Container Runtime Interface (CRI) and the two Open Container Initiative (OCI) specs are used to promote stability in the container ecosystem.
SUSE at “The City of Lights” for HPE Technology and Solutions Summit
Transformation and Future Trends at SUSECON 2019
Servers: Red Hat, Kubernetes and SUSE