Blog

Berlin Open Infrastructure Summit 2018 Redefines Cloud

by ComputingStack

Posted on Nov 21, 2018 at 11:00 AM


The Open Infrastructure Summit held in Berlin last week brought availability announcements for several exciting projects, including Airship, Kata Containers, StarlingX for edge computing, and Zuul for CI/CD. It also offered some fresh words on what defines a modern cloud. Virtual machines? Software-defined networking and storage? Not any more.


Before diving into details, let's recap the Berlin Summit. The major tracks the summit focused on were:

  • CI/CD
  • Container Infrastructure
  • Edge Computing
  • HPC/GPU/AI
  • Private/Hybrid Cloud
  • Public Cloud
  • Telecom NFV

We also luckily got the answer to the release name after Stein: Train, together with a lovely story from the Denver PTG, whose meeting venue sat near a train station. What a positive turn from noise into an amusing OpenStack release name! It makes me wonder whether some noise should be made at the next Beijing PTG from a nearby university, so as to get a "U" :). Beyond that, the OpenStack Summit will head to China in Q4 2019. Keynotes from Huawei, Red Hat, Volkswagen, and other companies described a full-stack cloud ready for telecom, an emphasis on workload categories over the cloud, and a distributed, intelligent manufacturing cloud benefiting the business.

ComputingStack concurs with these workload categories, which align with our past and ongoing efforts; indeed, we at ComputingStack have "coincidentally" been thinking the same way about describing what an IntOS-based cloud can do for customers. The recent evolution of ComputingStack IntOS aims to serve all the workloads of today's cloud industry well:
  • Virtual Machine
  • Storage: Object, Volume
  • Container
  • AI
  • Big Data
  • Direct hardware access as bare metal

Among those categories, virtual machines and storage will of course continue as the major root-level workloads. Containers, and cloud-native applications on Kubernetes, have been fundamentally changing the traditional definition of a cloud for a couple of years now. AI and Big Data are changing too: they used to be second-level workloads running on top of virtual machines, but that arrangement can no longer meet the expectations of the cloud experience, and more and more customers expect a complete API and user experience for machine learning to be delivered.

On the hardware side, direct hardware access for cloud users is becoming common. The reasons are manifold: NFV requires DPDK and physical-network awareness, AI workloads require hardware acceleration, and the low-latency requirements of 5G across the IoT/edge-computing spectrum translate into further dependencies on hardware. Customized hardware will thus stop being "custom" and become the standard, and the whole trend is driving ever more bare-metal service in the cloud. ComputingStack has seen tremendous value in Ironic and in CPUs/GPUs/FPGAs, and has been investing to make them a standard part of today's cloud; a minimal sketch of requesting a bare-metal node appears below.

We believe the industry will settle on an even firmer definition of the modern cloud in 2019/2020. One immediate impact will be the angle from which a cloud is viewed: the workload will be the key. ComputingStack sees a business transition coming in which a cloud, private or public, will be judged by a new de facto standard: how it onboards these workloads, at the root level or as a secondary layer. Translated into technical requirements, that means how container service is made part of your cloud service, what ready-made API and user experience you offer for AI and data workloads, and what hardware capacity you expose to your customers.
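
For illustration only, here is a minimal sketch, using openstacksdk, of how a bare-metal workload can be requested at the root level once Ironic sits behind Nova. The cloud name, image, flavor, and network below are hypothetical, not actual IntOS resources.

```python
# Minimal sketch: requesting a bare-metal node through Nova's Ironic
# driver. With Ironic behind Nova, a bare-metal node is requested the
# same way as a VM; the flavor maps to a hardware class.
import openstack

conn = openstack.connect(cloud="intos")  # assumed clouds.yaml entry

server = conn.create_server(
    name="ai-train-node",
    image="centos7-baremetal",   # assumed deploy image
    flavor="bm.gpu.large",       # assumed bare-metal flavor
    network="provider-net",      # assumed physical provider network
    wait=True,                   # block until the node is deployed
)
print(server.status)             # ACTIVE once deployment finishes
```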

Stay tuned for the IntOS AI cloud and for stories of how acceleration is helping some of our customers. ComputingStack also expects to present some of our best practices for running AI and blockchain on IntOS Cloud at the summit in China in Q4 2019.



Choosing the Right OpenStack Distribution

by ComputingStack

Posted on June 4, 2017 at 11:00 AM


If two words could describe OpenStack, the first would be innovative and the second would be complex. The complexity comes mainly from its hundreds (or thousands) of components, most of which are wide open. That is also why many people complain about OpenStack and are pessimistic about its future. A distribution is certainly the way to tackle this, and choosing one has become a critical step in whether the fully fledged benefits of OpenStack can be brought to a business.

Which distro you use leads to a distinctly different OPEX. A blog post by CloudOps.ca caught our eye by comparing a few distributions on the market, including Red Hat, Canonical, and Mirantis. As an OpenStack packaging provider, ComputingStack agrees with most of it, though we still see ourselves somewhat differently. The post is available at https://www.cloudops.com/2016/09/choosing-the-right-openstack-distribution/. In this blog, we dive deeper into: IntOS OpenStack by ComputingStack, Canonical OpenStack, Red Hat OpenStack, Mirantis OpenStack, Wind River Titanium, EasyStack from China, and Huawei FusionSphere OpenStack.

Distro | Brief | Strengths / Comparison
IntOS by ComputingStack | Ansible plus a self-maintained Python code repo, with all enterprise cloud features | Fully controlled and built from Python source; agile enough to meet varied requirements; easy to use, requiring no extra provisioning hosts beyond a basic Linux console
Canonical OpenStack | Apt, Juju, MAAS, Autopilot | Very developer-oriented and less operable; however, Canonical provides a managed service to make it easier
Red Hat RHEL OpenStack Platform | Focused on OpenShift, RHEL, Ceph, etc.; an open-source community version plus a commercially supported one | The RDO open-source version is good for research and PoC, but not for production
RDO Packstack, backed by Red Hat | An open-source community version | RDO open source is good for research and PoC, but not for production
MOS by Mirantis | Fuel | Appeals to very large enterprises; not recommended for a mid-size private data center, as it layers complexity comparable to OpenStack itself on top of OpenStack
EasyStack (China) | Closed (not open) scripts | Not a real distro concept, but self-maintained scripts
Huawei FusionSphere | Closed (not open) scripts | More targeted at the carrier market; appeals to large-scale telcos
Rackspace | OpenStack-Ansible provisioning | n/a

How IntOS OpenStack came into being

by ComputingStack

Posted on Dec 1, 2016 at 11:00 AM


After early doubts and skepticism, OpenStack has become a de facto data-center solution and the core of the modern data centre. Along the journey there have been many success stories, but just as many that went the other way, and this holds whether the player is an IT giant like HP or Cisco or a startup. So what really matters in the OpenStack arena? The author was fortunate to experience OpenStack from day one, when it announced its open-source policy in 2011, and to work on a few high-profile OpenStack projects in the market afterwards. From those experiences, we became aware that for each project's success, provisioning and packaging are the major concerns. There are multiple reasons why this drove us to create IntOS:


Three Cornerstones of Public Cloud

by ComputingStack

Posted on Sep 26, 2016 at 11:00 AM


In many cases, the trade-offs between private and public cloud are a major subject for businesses, small or large, when deciding which way to go. We believe there is no standard answer; however, ComputingStack sees three cornerstones a decent public cloud must have to benefit its users:

  • Disaster Recovery
  • Scalability
  • Availability

Data loss is considered a catastrophe, not only technically but also financially; to some degree, securing data is also a legal responsibility. Data loss is never a forgivable error. That is why storage, replication, and disaster recoverability have been a major effort within ComputingStack.

A public cloud serves diverse customers and unpredictable workloads. As a service provider, a public cloud must commit to uninterrupted service, without negotiation: once it is running, a maintenance window can never be assumed, whereas that is still possible in a private cloud. On the other hand, for a sustainable business model the upfront investment is usually kept relatively low to get the business started; even so, expandability, in storage, networking, and compute, must be considered from day one at any cost.

Highly available service is another major differentiator from private cloud: because a public cloud is more dedicated, a shared facility with better availability features becomes possible. In any circumstance, high availability means high cost, but in a shared, multi-customer environment that cost goes down. ComputingStack bears all of these availability requirements in mind in our implementations.


How Mature Is OpenStack Today?

by ComputingStack       

Posted on Sep 25, 2016 at 10:45 PM


The maturity of open-source software scales with the size of its user community. From my first OpenStack project to today, OpenStack has matured considerably; however, there are still components for which it is too early to consider production use. From the author's experience in research and implementation, there are three tiers of maturity:

Tier 1 comprises the earliest-released services: compute, storage, and networking. It is highly recommended to adopt software-defined storage such as Ceph, Cinder, and Swift, along with OpenStack Neutron networking and its L2/L3 artifacts such as routers, VLAN/VXLAN, and Open vSwitch, all of which have become reliable and consistent after many years of iteration. On the compute side, the KVM integration has become the top option wherever virtualization is required.
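
To make Tier 1 concrete, here is a minimal sketch using openstacksdk (an assumption; the post does not prescribe a client) that creates the L2/L3 artifacts mentioned above: a tenant network, a subnet, and a router uplinked to an external network. All resource names are hypothetical.

```python
# Minimal sketch of Tier-1 Neutron usage via openstacksdk.
import openstack

conn = openstack.connect(cloud="mycloud")  # assumed clouds.yaml entry

# L2: a tenant network and a subnet on it.
net = conn.network.create_network(name="app-net")
subnet = conn.network.create_subnet(
    network_id=net.id,
    ip_version=4,
    cidr="10.0.0.0/24",
    name="app-subnet",
)

# L3: a router with a gateway on the external network ("public" is an
# assumed name) and an interface on the tenant subnet.
public = conn.network.find_network("public")
router = conn.network.create_router(
    name="app-router",
    external_gateway_info={"network_id": public.id},
)
conn.network.add_interface_to_router(router, subnet_id=subnet.id)
```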

Tier 2 is the auxiliary services, such as Heat and Ceilometer. As tools they have become quite adoptable, with a minimal degree of bugs. Bear in mind, however, that what they achieve can also be done fairly easily through the APIs or other in-house development.
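
As an illustration of that last point, the sketch below does with a direct API call what a one-resource Heat stack would do: boot a single server and wait for it. The cloud, image, flavor, and network names are assumptions for the example.

```python
# Minimal sketch: replacing a one-resource Heat stack with a direct
# call through openstacksdk's compute layer.
import openstack

conn = openstack.connect(cloud="mycloud")  # assumed clouds.yaml entry
server = conn.create_server(
    name="web-1",
    image="ubuntu-16.04",   # assumed image name
    flavor="m1.small",      # assumed flavor name
    network="app-net",      # assumed tenant network
    wait=True,              # poll until ACTIVE, as Heat would
)
print(server.id, server.status)
```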

Tier 3 includes the Trove database service, the Sahara big-data service, and Magnum. Services in this tier layer their functions on top of tenant VMs using OpenStack's API services. Based on current implementations, it is too early to adopt these OpenStack APIs: there is no reliable, decent installation artifact yet that fixes bugs in a timely way and keeps them running properly. For firewalls and VPNaaS there are quite a few mature alternatives to install and run today, but the OpenStack APIs for them are still not production-ready.


When a Customer Pays US$42K for an m3.xlarge VM

by ComputingStack       

Posted on Sep 24, 2016 at 10:45 PM


Quite a while ago, I learned that a few AWS services can be priced as high as US$50K per year per VM. One example is a db.r3.8xlarge RDS instance running memory-optimized Microsoft SQL Server, charged at US$9.97 per hour, which works out to roughly US$87K a year. When I first saw that price, I thought no business would actually buy it, and that AWS was merely offering customers a high-end brand concept. A recent story taught me a good lesson, however, when we had to pay regardless of the price. For comparison, shortly before, when a private data center hosted a comparable SQL Server cluster, the price was perhaps one tenth of the total OPEX+CAPEX. To explain why it matters: this price alone cut almost half of the profit.
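
As a quick sanity check on those figures, the arithmetic below assumes continuous 24x365 on-demand usage; the hourly rate is the one quoted above.

```python
# Back-of-the-envelope yearly cost for a continuously running instance.
hourly_rate = 9.97           # US$/hour for db.r3.8xlarge RDS (SQL Server)
hours_per_year = 24 * 365    # 8,760 hours

yearly_cost = hourly_rate * hours_per_year
print(f"US${yearly_cost:,.0f} per year")   # -> US$87,337 per year
```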

Then why did we have to pay?

The answer is simple: the budget plan was based entirely on plain (cheaper) VMs and other low-end instances. By then, all customer facilities were established, and people's skill sets and mindsets had been transformed to AWS. At some point, though, the infrastructure needed to be expanded, and it came to the point where buying this pricey service was the only option.

Does this sound like "vendor lock-in"?