Blog

Choosing the Right OpenStack Distribution

by Computing Stack

Posted on June 4, 2017 at 11:00 AM


If there were two words to describe OpenStack, the first would be "innovative" and the second "complex". The complexity is caused mainly by its hundreds (or even thousands) of components, most of which are wide open. That is also why many people complain and are not optimistic about its future. A distribution is certainly the way to tackle this, and choosing one has become a critical step in determining whether the fully fledged benefits of OpenStack can be brought to a business.

Read More

How intOS OpenStack came into being

by Computing Stack

Posted on Dec 1, 2016 at 11:00 AM


After early doubts and skepticism, OpenStack has become a de facto data centre solution today, and the core of the modern data centre. Along the journey there have been many success stories, but also many that went the other way, and this holds for IT giants like HP and Cisco as much as for startups. So what really matters in the OpenStack arena? The author was fortunate to experience it from day one, when OpenStack announced its open source policy in 2011, and through a few high-profile OpenStack projects in the market afterwards. From those we became aware that, for the success of each project, provisioning and packaging are the major concern. There are multiple reasons that drove us to come up with IntOS:

Read More

Three Cornerstones of Public Cloud

by Computing Stack

Posted on Sep 26, 2016 at 11:00 AM


In many cases the trade-off between private cloud and public cloud is a major question for businesses, small or large, when deciding which way to go. We believe there is no standard answer; however, ComputingStack believes a decent public cloud rests on three cornerstones to benefit its users:

  • Disaster Recovery
  • Scalability
  • Availability

Data loss is a catastrophic failure, not only technically but also financially. To some degree, securing data is also a legal responsibility. Data loss is never a forgivable error. That is why storage, replication, and disaster recoverability have been a major effort within ComputingStack.

A public cloud features diverse customers and unpredictable workloads. As a service provider, a public cloud must be committed to uninterrupted service, without negotiation: once the service is live, a maintenance window should never be assumed, whereas that is possible in a private cloud. On the other hand, to keep the business model sustainable, the upfront investment is usually kept relatively low to get the business started; even so, expandability, in terms of storage, networking, and compute, must be considered from day one, whatever the cost.

Highly available service is a major differentiator from private service: because a public cloud is a dedicated, shared facility, better availability features become feasible. In any circumstance high availability means high cost, but in a shared multi-customer environment that cost goes down. ComputingStack bears all of these availability requirements in mind in our implementations.


How Mature Is OpenStack Today?

by ComputingStack

Posted on Sep 25, 2016 at 10:45 PM


The maturity of open source software is linear in the size of its user community. From my first OpenStack project to today, OpenStack has become considerably mature. However, there are still components for which it is too early to consider production use. In the author's experience of research and implementation, maturity falls into three tiers:

Tier 1 is the earliest-released core: compute, storage, and networking. It is highly recommended to adopt software-defined storage such as Ceph, Cinder, and Swift, along with OpenStack Neutron networking and its L2/L3 artifacts (routers, VLAN/VXLAN, Open vSwitch), which have become reliable and consistent after many years of iteration. On the compute side, the KVM integration has become the top option whenever virtualization has to be considered.
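
To give a feel for how consistent these Tier 1 APIs have become, here is a minimal sketch using the openstacksdk Python client; the cloud name "mycloud" (an entry in clouds.yaml) and the "demo-*" resource names are illustrative assumptions, not anything from this post:

    # Minimal sketch with openstacksdk; "mycloud" is an assumed clouds.yaml entry.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    # L2: a tenant network with one IPv4 subnet.
    net = conn.network.create_network(name="demo-net")
    subnet = conn.network.create_subnet(
        name="demo-subnet",
        network_id=net.id,
        ip_version=4,
        cidr="10.0.0.0/24",
    )

    # L3: a router, with the subnet attached as an interface.
    router = conn.network.create_router(name="demo-router")
    conn.network.add_interface_to_router(router, subnet_id=subnet.id)

The point is that these L2/L3 primitives now behave predictably across releases, which is exactly what Tier 1 maturity means here.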

Tier 2 is the auxiliary services, such as Heat and Ceilometer. As tools they have become much more adoptable, with a minimal degree of bugs. Bear in mind, however, that the functions they provide can easily be achieved through the API or other in-house development, as the sketch below shows.
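
As a hedged illustration of that point, the following sketch provisions a server directly against the compute API with the openstacksdk Python client, which is roughly what a one-resource Heat template does for you; the cloud, image, flavor, and network names are illustrative assumptions:

    # Sketch: in-house provisioning via the API instead of a Heat stack.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    image = conn.compute.find_image("ubuntu-16.04")   # assumed image name
    flavor = conn.compute.find_flavor("m1.small")     # assumed flavor name
    network = conn.network.find_network("demo-net")   # assumed network name

    server = conn.compute.create_server(
        name="demo-vm",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    # Block until the server is ACTIVE -- the part Heat would otherwise manage.
    server = conn.compute.wait_for_server(server)

Heat earns its keep when a stack grows to many interdependent resources with rollback; for a handful of VMs, direct calls like these are often the simpler choice.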

Tier 3 includes the Trove database service, the Sahara big data service, and Magnum. Services in this tier layer functions on top of tenant VMs using OpenStack's API services. Based on current implementations it is too early to adopt these OpenStack APIs: there is no reliable, decent installation artifact yet to fix bugs in a timely fashion and keep them running properly. For firewalls and VPNaaS there are already quite a few mature alternatives to install and run, but the OpenStack API itself is still not ready for production.


When a Customer Pays US$42K for an m3.xlarge VM

by ComputingStack

Posted on Sep 24, 2016 at 10:45 PM


Quite a while ago I learned that for a few AWS services the price can go as high as US$50K per year per VM. One example is the db.r3.8xlarge RDS instance running Microsoft SQL Server (memory optimized), which is charged at US$9.97 per hour, roughly US$87K a year. When I first saw this price I thought that perhaps no business would actually buy it; rather, AWS was offering a high-end brand concept to its customers. A recent story, however, taught me a good lesson, when we had to pay regardless of the price. For comparison, shortly before, when a private data centre with a SQL Server cluster was used, the price accounted for perhaps one tenth of the total OPEX + CAPEX. To explain why it matters: this price cuts our profit almost in half.
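
As a back-of-envelope check of that annual figure (a sketch that ignores reserved-instance discounts, storage, and I/O charges):

    # Annualize the quoted on-demand hourly rate for db.r3.8xlarge.
    hourly_rate = 9.97          # US$ per hour, SQL Server, memory optimized
    hours_per_year = 24 * 365   # ignoring leap years and any downtime
    annual_cost = hourly_rate * hours_per_year
    print(f"US${annual_cost:,.0f} per year")  # -> US$87,337 per year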

So why did we have to pay this time?

The answer is simple: the budget plan was based entirely on plain VMs (which are cheaper) or other low-end VMs. All the customer facilities had been established, and people's skill sets and mindsets had been transformed to AWS. At some point, however, the infrastructure needed to be expanded, and it came to the situation where buying this pricey service was the only option.

Does this sound like a familiar story of "vendor lock-in"?