Conversational models are a central topic in artificial intelligence research, with applications in settings such as online customer service and helpdesks. Many conversational agents (chatbots) are powered by retrieval-based models, which answer queries from a set of predefined responses; question–answer pairs are stored and served through relational database queries. These models are adequate, but they are not robust enough for more general use cases. Recently, the deep learning boom, in particular recurrent neural networks (RNNs), has produced powerful generative models that mark a large step towards multi-domain generative conversational systems. However, RNNs have their own weakness when applied to dialog systems: the generated sentences tend to be short, universal, and meaningless, a problem we call "low substance". This is likely because chatbot-style dialogs are highly diverse and a single query may not convey sufficient information for the reply. The Recurrent Embedding Dialog Policy (REDP) is a model designed to handle uncooperative user behaviour, but it is less data-rich than the first two approaches. Because each model has its own weakness, the resulting chatbot behaves inconsistently. Thus, in this study the author proposes an ensemble of retrieval-, generative-, and recurrent-embedding-based dialog systems, called the "RGR-Based Dialog System". Initial results show that feeding the retrieved candidate into the generative model can resolve the "low substance" problem, and that combining the three models can resolve the inconsistency of the conversational AI.
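To make the ensemble idea concrete, the following is a minimal, hypothetical sketch of the retrieval-then-generate step described above. The question–answer pairs, the similarity threshold, and the `generate` stub are all illustrative assumptions; in the actual system the retrieval component would query a relational database and the generator would be an RNN sequence-to-sequence model.

```python
import difflib

# Hypothetical predefined question-answer pairs standing in for the
# relational-database-backed retrieval model.
QA_PAIRS = {
    "what are your opening hours": "We are open 9am-5pm, Monday to Friday.",
    "how do i reset my password": "Click 'Forgot password' on the login page.",
    "where is my order": "You can track your order from the 'My Orders' page.",
}

def retrieve(query, threshold=0.6):
    """Retrieval model: return the best-matching predefined answer,
    or None when no stored question is similar enough (threshold is
    an illustrative assumption)."""
    best_q = max(
        QA_PAIRS,
        key=lambda q: difflib.SequenceMatcher(None, query.lower(), q).ratio(),
    )
    score = difflib.SequenceMatcher(None, query.lower(), best_q).ratio()
    return QA_PAIRS[best_q] if score >= threshold else None

def generate(query, candidate=None):
    """Stand-in for the RNN generative model. Conditioning on a retrieved
    candidate is the remedy for "low substance" replies; without one, the
    stub mimics the short, generic output the abstract criticizes."""
    if candidate is not None:
        # A real seq2seq model would use the candidate to shape decoding;
        # here we simply pass it through.
        return candidate
    return "I see."  # the "low substance" failure mode

def rgr_respond(query):
    """Ensemble step: try retrieval first, then feed any candidate
    into the generative model."""
    candidate = retrieve(query)
    return generate(query, candidate)
```

For example, a query covered by the knowledge base yields a substantive reply, while an out-of-domain query falls back to the generator alone, illustrating why the retrieved candidate matters.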