
Hey Amber,


I think you might simply need to add different Variable labels. (I made a copy of your Episode 1 and tried all of this out in my temporary Ep3 copy.)


I added the new Variable label "spoken2" under the Holding Up section, and added "spoken3" under the Where From section:


"Holding Up" section:



"Where From" Section:



This way, each Variable has its own "trigger." So when you check that all three are there, it will look for the three separate ones.



I did not touch your Ep1. So, please check out my Ep3 and see if it works as you want. If it does, just make the changes to your Ep1 and delete my Ep3. :-)


Hope it helps!!!


The text length delay shows up on the Messaging Platforms, not the Simulator. :-( Here are a few suggestions as you fine-tune your bot timing:


1. Each Messaging platform will end up showing your story with a slightly different timing. So annoying, right? But it depends on platform bandwidth, speed of connection, individual phones, cosmic rays, etc. It's not terribly different timing-wise, but it's unpredictable enough to suggest the following...


2. You don't need to put your own "reading" time break in between every one of your lines. Trust that it will flow OK without them. (Really short delays of 1-2 seconds will tend not to even be noticed.) Keep in mind that if you add your own 13-second delay in the story, your reader will experience the built-in platform reading time delay AND your own 13-second delay. Added together, the delay might be a bit too long.


3. In my stories, I use the time delays only when I want a "scene break" or a notable pause. It's up to you, of course, how long you want your reader to wait. Just remember that too long of a time delay will feel, well, really long to your reader.


4. Really long blocks of text will not show up well on the Messaging platforms. They will tend to scroll up "in a big chunk" and might make it tough for your reader to follow the story. They might have to stop and scroll back up to read the entire block. I've been breaking up my long "paragraphs" into much shorter individual messages.


Now, by doing this you risk sending a ton of consecutive messages (buzz, buzz, buzz...), which can be very annoying too. Therefore...


5. Try to allow your reader to have a bit of "control" over the story flow. By inserting a Player Choice, or having the Player Choice simply say "[More]", you allow your reader to "click to keep going." In my stories, I tend not to go beyond 3-4 story messages before I give the reader a Player Choice input (of some kind).


These are just suggestions, of course! :-) Publish your bot and then do an editing pass once you see it actually running on your chosen platform. Hope this helps!


If you're interested, there are bot Stories that you can read from the "Store" on the main Sequel webpage:

https://store.onsequel.com/#/

Please check the version of Chrome you are using. The latest version is Version 51.0.2704.103.


https://support.google.com/chrome/answer/95414?co=GENIE.Platform%3DDesktop&hl=en

Hi Alejandro, You probably need to check your version of Chrome. The current Chrome version live is Version 51.0.2704.103. Please try updating your Chrome, and let me know if that doesn't help.

Hi Anna, Image nodes are expressed as structured templates on Facebook.

Hi Vincent. No, there isn't a way to do an automated "carriage return" or create an automatic "next" message box.


One thing to keep in mind: if you have a ton of content coming to the reader/user all at once, it will be difficult for them to keep up with the message flow. It will scroll up the screen quite quickly. You might want to display things in smaller chunks for better readability. (This all depends on what you are presenting, of course.)

You are correct. SHARE Nodes are not yet available on Telegram. We're on it! Thanks!

Hi Vincent. Thanks for your note!


The API reply coming back for your request is:


{
"error_message": "This service requires an API key.",
"html_attributions": [],
"results": [],
"status": "REQUEST_DENIED"
}

For this particular request, i.e., the Google Maps API, you'll need to provide an API key to use their API. Please consult the Google API documentation for where exactly to obtain this API key.
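To give you an idea, a Places request with the key included typically looks something like the line below. (This is just a sketch: the exact endpoint and parameters depend on which Google API you're calling, and YOUR_API_KEY is a placeholder for the key from your Google developer console.)

https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=40.7128,-74.0060&radius=500&key=YOUR_API_KEY

Without the key parameter, Google returns the REQUEST_DENIED response you're seeing above.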


As for the 200 OK coming back as a response, it's due to how Google's API has implemented this particular error scenario. Ideally, Google should not have returned a 200 for an error result.


Please also check the "Open Full Response" link in the popup to see the full response the API returned for your request.

Also, there is one error in the request parameters: one of them starts with "?". We will add a check in the authoring tool so that the "?" character is not accepted in GET request parameters.

Hope this helps! Thanks again!

Thanks, Pavel. We're constantly working to improve our NLP AI, and this is a great case for us to do just that.


Currently, when we find an equal semantic match between two separate keyword branches, we simply choose one branch to go down. For example, "Day" semantically equates to "Week," so if you have the two Keywords "Day" and "Week" and type in "Every day," it might match up to "Week." We're going to give more weight to an exact string match than to a semantic match of a keyword. This will be improved and updated ASAP.
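To illustrate the planned change, here's a minimal sketch in Python of the idea (just an illustration, not our actual matcher; the function and parameter names are made up):

# Minimal illustration (not the real Sequel matcher) of preferring an
# exact string match over a purely semantic match between keywords.
def pick_branch(user_text, keywords, semantic_score):
    text = user_text.lower()
    # 1. If the user's text literally contains a keyword, take that branch.
    exact = [k for k in keywords if k.lower() in text]
    if exact:
        return exact[0]
    # 2. Otherwise fall back to the best semantic similarity score.
    return max(keywords, key=lambda k: semantic_score(text, k))

# With keywords "Day" and "Week", typing "Every day" now picks "Day",
# because the exact word match wins over any semantic score for "Week".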


Thanks again for bringing it to our attention!