Posted on Nov 21, 2018 at 11:00 AM
The Open Infrastructure Summit held in Berlin last week brought availability announcements for some exciting projects, such as Airship,
Kata Containers, StarlingX for edge computing, and Zuul for CI/CD.
It also gave us some good words for defining what a modern cloud is. Virtual machines? Software-defined virtual networks and storage? Not any more.
Before diving into the details, let's recap the Berlin Summit. The major subjects it focused on were:
Posted on June 4, 2017 at 11:00 AM
If there are two words to describe OpenStack, the first is innovative and the second is complexity.
The complexity is caused mainly by its hundreds (or thousands) of components, most of which are wide open. That is also why many people complain and are not optimistic about its future.
A distribution is certainly the way to tackle this, and choosing one has become a critical step in whether the full benefits of OpenStack can be brought to a business.
Which distro you use leads to a distinct OPEX. A blog by CloudOps.ca caught our eye by comparing a few distributions on the market, including those from Red Hat, Canonical, and Mirantis. As an OpenStack packaging provider, ComputingStack agrees with most of it, but we still see a few things differently. The post is available at: https://www.cloudops.com/2016/09/choosing-the-right-openstack-distribution/ In this blog, we will dive deeper into: IntOS OpenStack by ComputingStack, Canonical OpenStack, Red Hat OpenStack, Mirantis OpenStack, Wind River Titanium, EasyStack from China, and Huawei FusionSphere OpenStack.
| Distribution | Tooling | Notes |
| --- | --- | --- |
| IntOS by ComputingStack | Ansible and a self-maintained Python code repo, with all enterprise cloud features | Fully controlled, built from Python code; agile enough to meet varied requirements; easy to use, requiring no extra provisioning hosts beyond a basic Linux console |
| Canonical OpenStack | Apt, Juju, MAAS, Autopilot | Very developer-oriented and less operable; however, Canonical provides a managed service to make it easier |
| Red Hat RHEL OpenStack Platform | Focused on OpenShift, RHEL, Ceph, etc.; both an open-source community version and a commercially supported one | RDO open source is good for research and PoC, but not for production |
| RDO Packstack, backed by Red Hat | An open-source community version | RDO open source is good for research and PoC, but not for production |
| MOS by Mirantis | Fuel | Appeals to very large enterprises; not recommended for mid-size private data centers, as it adds a layer of complexity comparable to that of OpenStack itself |
| EasyStack (China) | No open scripts | Not a true distro concept, but self-maintained scripts |
| Huawei FusionSphere | No open scripts | Targeted at the carrier market; appeals to large-scale telcos |
| Rackspace OpenStack | Ansible provisioning | |
Posted on Dec 1, 2016 at 11:00 AM
After early doubts and skepticism, OpenStack has become a de facto data center solution today, and the core of the modern data centre. Along the journey there have been many success stories, but also many that went the other way. This is true regardless of whether the adopter is
an IT giant like HP or Cisco, or a startup. So what really matters in the OpenStack arena?
The author was fortunate to experience OpenStack from day one, when it announced its open-source policy in 2011, and through a few high-profile OpenStack projects in the market afterwards. From those experiences,
we became aware that, for each project's success, provisioning and packaging are the major concerns. Several reasons drove us to come up with IntOS:
Posted on Sep 26, 2016 at 11:00 AM
In many cases, the trade-offs of private cloud versus public cloud are a major subject for small and large businesses alike when deciding on a direction. We believe there is no standard answer; however, ComputingStack believes in three cornerstones
for a decent public cloud to benefit users:
Posted on Sep 25, 2016 at 10:45 PM
The maturity of open-source software grows with the size of its user community. From my first OpenStack project to today, OpenStack has become considerably mature. However, there are still some components for which it is
too early to consider production use. In the author's experience of research and implementation, the components fall into three maturity tiers:
Tier 1 is the earliest-released core: compute, storage, and networking. It is highly recommended to adopt software-defined storage such as Ceph, Cinder, and Swift, together with OpenStack Neutron networking and its L2/L3 artifacts such as routers, VLAN/VXLAN, and Open vSwitch, which have become reliable and consistent after many years of iteration. Compute, such as the KVM integration, has become the top option wherever virtualization has to be considered.
Tier 2 is the auxiliary services, such as Heat and Ceilometer. As tools, they have become quite adoptable, with a minimal number of bugs. Bear in mind, however, that the functions they provide can also be achieved easily through the API or other in-house development.
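To illustrate how small a typical Heat use case is, here is a minimal HOT template sketch that boots a single VM. The image and flavor names are placeholders for illustration, not values from any real deployment:

```yaml
heat_template_version: 2016-10-14

description: Minimal sketch of a Heat stack that boots one VM

parameters:
  image:
    type: string
    default: cirros-0.4.0   # placeholder image name
  flavor:
    type: string
    default: m1.small       # placeholder flavor name

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
```

A stack like this is launched with `openstack stack create -t server.yaml mystack`; the same result is a single Nova API call, which is why in-house tooling can so easily stand in for Heat.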
Tier 3 includes the Trove database service, the Sahara big-data service, and Magnum. Services in this tier layer functions on top of tenant VMs using OpenStack's API services. Based on the current implementations, it is too early to adopt these OpenStack APIs: there is no reliable and decent installation artifact yet to fix bugs in a timely manner and keep them running properly. For firewalls and VPNaaS there are quite a few mature alternatives to install and run today, but the corresponding OpenStack APIs are still not ready for production.
Posted on Sep 24, 2016 at 10:45 PM
Quite a while ago, I learned of a few AWS services whose price can go as high as US$50K per year per VM. One example is the db.r3.8xlarge RDS instance running memory-optimized Microsoft SQL Server, which is charged
at US$9.97 per hour, or roughly US$87K per year. When I first saw this price, I thought no business would actually buy it, and that AWS was merely presenting a high-end brand concept to customers. However, a recent story taught me a good lesson, when
we had to pay regardless of the price.
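The annual figure follows directly from the hourly rate. A quick sketch of the arithmetic (the US$9.97/hour rate is the one quoted above; actual AWS pricing varies by region and license terms):

```python
# Annualize an hourly cloud rate, ignoring leap years.
HOURLY_RATE_USD = 9.97      # db.r3.8xlarge RDS rate quoted above
HOURS_PER_YEAR = 24 * 365   # 8760 hours

annual_cost = HOURLY_RATE_USD * HOURS_PER_YEAR
print(f"Annual cost: US${annual_cost:,.0f}")  # Annual cost: US$87,337
```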
By contrast, shortly before, when a private data center with a SQL Server cluster was used, the price accounted for perhaps one tenth of the total OPEX plus CAPEX. To explain why it matters: this price cuts almost half of the profit.
So why did we have to pay now?
The answer is simple: the budget plan was entirely based on plain VMs (which are cheaper) or other low-end VMs. All customer-facing facilities were established, and people's skill sets and mindsets had been transformed to AWS. At some point, however, the infrastructure needed to be expanded, and it came to a situation where buying this pricey service was the only option.
Does this sound like 'vendor lock-in'?