Protect Your Time and Money from the Data Explosion
As today’s data explosion continues, there’s really no question that for the IT manager, or anyone in a similar role, software-defined storage (SDS) is a worthwhile investment. With it, your organization can scale its storage to match rapidly ballooning data while keeping tight control of costs and the maintenance burden.
Without SDS, you’re going to need a budget that grows for every extra gigabyte your organization wants to store. Unless you work at an organization with a rapidly expanding IT budget—yeah, we thought not—SDS is the only realistic way to tackle the situation. In the long run (and maybe even quite quickly), SDS will save you two things: time and money.
Istio: A New Routing Tier for Cloud Foundry
When an app is pushed in Cloud Foundry, the Cloud Controller creates identifiers for the app plus some routing metadata (a DesiredLRP and its routing metadata) and forwards those to Diego. Diego then schedules the application, finding a home for it on one of the available cells, where it runs in a container. Once the application is up and running, Diego’s BBS API notifies the route-emitters about the app. The route-emitters forward the routes, along with the IP and port of the application container, to two places: NATS, which gorouter uses to receive its route updates, and the Routing API, which provides routes to the TCP router. Gorouter and the TCP router then update their routing tables with the updates they receive from NATS and the Routing API, respectively.
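To make the NATS leg of that flow concrete, here is a minimal Go sketch, not route-emitter’s actual code, that publishes a route registration of the general shape gorouter consumes on the router.register subject. The host, port, and URI values are invented examples, and the struct fields shown are only the most essential ones.

    // Illustrative only: publish a route registration to NATS so a router
    // subscribed to "router.register" can map a URI to a container address.
    package main

    import (
        "encoding/json"
        "log"

        "github.com/nats-io/nats.go"
    )

    // routeRegistration mirrors the basic shape of a route update:
    // the container's address plus the external URIs that map to it.
    type routeRegistration struct {
        Host string   `json:"host"`
        Port uint16   `json:"port"`
        URIs []string `json:"uris"`
    }

    func main() {
        nc, err := nats.Connect(nats.DefaultURL)
        if err != nil {
            log.Fatal(err)
        }
        defer nc.Close()

        msg, err := json.Marshal(routeRegistration{
            Host: "10.0.16.23",                   // cell/container IP (example value)
            Port: 61001,                          // host port mapped to the app (example value)
            URIs: []string{"myapp.example.com"},  // route bound to the app (example value)
        })
        if err != nil {
            log.Fatal(err)
        }

        // The router merges registrations received on this subject
        // into its routing table.
        if err := nc.Publish("router.register", msg); err != nil {
            log.Fatal(err)
        }
        if err := nc.Flush(); err != nil {
            log.Fatal(err)
        }
    }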
SUSE Manager’s missing locking feature, and how it’s not missing at all
Earlier this month, a colleague from France asked why SUSE Manager doesn’t offer a system locking feature when you choose Salt as the client stack.
This feature is still available if you’re using the traditional SUSE Manager client stack. It allows you to lock the system and prevent any changes like installing or removing packages until the system is unlocked again.
Deploying SLURM PAM modules on SLE compute nodes
The high CPU-GPU and memory density of modern HPC compute nodes provides sufficient resources for concurrent distributed workloads. Workloads on a compute node will usually belong to different users, and those workloads are understandably important to their respective owners. Moreover, research workloads may have normal runtimes ranging from seconds to weeks or even months. If a user were to access that node directly, start work or processes not managed by the cluster scheduler or resource-management facilities, and crash the node, that would certainly not be fair to the other users.
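This is the gap the SLURM PAM modules close: a module such as pam_slurm_adopt rejects interactive logins from users who hold no active job allocation on that node. As a rough illustration of the underlying check, and not the PAM module’s actual implementation, the Go sketch below asks squeue whether a given user currently has a job on a given node; the user and node names are placeholders, and the squeue flags used are just one way to express the query.

    // Illustration of the access policy: allow a login only if the scheduler
    // has allocated the user a job on this node. Not a PAM module.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // userHasJobOnNode asks squeue for the IDs of the user's jobs that
    // include the given node; any output means an allocation exists.
    func userHasJobOnNode(user, node string) (bool, error) {
        out, err := exec.Command("squeue", "-h", "-u", user, "-w", node, "-o", "%A").Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) != "", nil
    }

    func main() {
        user, node := "alice", "node042" // example values
        ok, err := userHasJobOnNode(user, node)
        if err != nil {
            fmt.Fprintln(os.Stderr, "squeue failed:", err)
            os.Exit(1)
        }
        if !ok {
            fmt.Printf("deny: %s has no job allocated on %s\n", user, node)
            os.Exit(1)
        }
        fmt.Printf("allow: %s has a job running on %s\n", user, node)
    }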