Cognitive ORE: Logo Outsider Enter



Details: Past




Summary: GAMMA (current)




Most of the GAMMA is about actually building out a production system to transact robotics endpoint-control types. This is a complex yet transparent infrastructure system that runs on Microsoft Azure at this time. Many of the required automation technologies to support activity-based record management have been implemented, and if you would like a demo, you can contact us to set one up. The system you're on now is the current implementation, and the user portal is roughly 6 months behind the dev branch. The dev branch is gradually being integrated into the production system, including many social features with a new twist. If you have a key, you'll also be able to see some of the features we're drumming up inside. Much of the Beta was about getting a system onto rails to make things happen, but by the end of the GAMMA, a production system will exist.


Server Summary
- Tripled the available disk space
- Doubled the available disk ops/io
- Implemented additional security modules at both the network and runtime levels
- Implemented a production database for the core system
- Implemented production automation and management systems to help moderate the server
- Implemented a production integration methodology to rapidly scale and prototype new features into general availability



Summary: Beta -> 02/06/2021


Server Summary
- Doubled the available RAM
- Doubled the available CPU core-count
- Tripled the available disk ops/io
- Implemented security modules at both the network and runtime levels



Details about the Beta

2020 was a hard year for many people, and this project continued to be adjusted to meet the needs of where the world is heading. When we started out, we wanted to have a novel meta-link engine capable of performing I/O behavior in a variety of use cases, such as endpoint management, data hosting, and more.


In the Beta, the primary focus was taming the unbound nature of a centralized server that managed decentralized access points. This process was slowed down so that enough valid information could be gathered and the appropriate management protocols could be built to run a scaled application infrastructure. Although the Alpha dealt with purely conceptual information, the Beta has been an active trial with a live, unthrottled network. Many important pieces of data were derived during this process, and many changes were made that help us better manage the server, security, compliance, cost basis, and more moving forward.


We've spent the last year making sure various default systems are in place to support this site at scale. The site is now owned by CORE.HOST, LLC, a Delaware Information Technology company, which has put in place the required technical resources to continue to improve the site. The GAMMA, expected to last 1-2 years, will focus on improving the terminology of the concept, building out stateful features, implementing novel data storage systems, and improving the UI of the site. Additionally, various client-facing applications will be developed to help end users interact with CORE.HOST™.


What are you working on now?

For the last 4 months we've been working on developing network routines and logging procedures that make it reasonable to manage the site and remain compliant as we grow. We're doing this without giving up on the concept of client identity security. This was an essential first step, and it's mostly tight at this point. Many of the early systems that dealt with email as a point of identity have been eliminated and replaced with more novel concepts.


When we open things up after the GAMMA, we are planning to keep the site safe in various common ways, and in some inventive or abstract ways too. We've been experimenting with the correct way to do many types of system moderation without becoming a zombie operations organization. We've made a lot of progress and developed some interesting methods to get data operations into a position to move smoothly with 100+ simultaneous active users.


Another goal that became clear during the Alpha was eliminating passwords in the final release-ready version. Making passwords an afterthought where possible seemed in line with where other players in the industry are moving. However, there wasn't really a logical way to do this out of the box, so we're experimenting with new ways to make it happen without giving up on functionality. We have a solid replacement lined up now that has to do with keys.
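
To make the key idea concrete, here's a minimal sketch of how a challenge-response login with a keypair can replace a password, assuming something like Ed25519 signatures. The names and flow are illustrative only, not our shipped implementation:

```python
# Minimal sketch of a passwordless challenge-response login, assuming an
# Ed25519 keypair stands in for the "key" described above. Names here are
# illustrative, not the actual CORE.HOST implementation.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Client side: generate a keypair once; only the public key is registered.
client_key = Ed25519PrivateKey.generate()
registered_public_key = client_key.public_key()

# Server side: issue a random challenge instead of asking for a password.
challenge = os.urandom(32)

# Client side: prove ownership of the key by signing the challenge.
signature = client_key.sign(challenge)

# Server side: verify the signature against the registered public key.
try:
    registered_public_key.verify(signature, challenge)
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```

The appeal of this kind of design is that nothing secret ever travels to the server: it only ever sees the public key and a signature it can check.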


What are you looking to provide to everyone soon?

We will provide reasonable I/O routing for meta-links to any allowed location within our system. This seems obvious, and it's fairly trivial, but there is a lot to do around this concept beyond what has been done already. We're focused mostly on those things, although many of the features are only experimental for now. In the GAMMA, we're going to work most of them out into a production-ready format.


The most nodes connected at once so far has been 18, and there is no reason to think this couldn't scale up to some nth resource amount if we implement the server capacity. Almost all of the work so far was just a trial, and we'll most likely restart the entire engine from absolute scratch upon launch. If we can get our C.ORE™ system into a range that is legal and acceptable, we'll even think of rolling this out with more features too. We're talking about the best way to do that now, and it needs to have some integrity to it beyond a single human operator's capacity. What makes BTC novel and secure is the amount of power it consumes. This is both a good and a bad thing, but it's something that makes it hard to control without controlling all the power itself.


Many features are required to facilitate interacting online, how many are built?

You need quite a few. For instance, over the past month we worked on a route denial mechanism after getting absolutely smoked by bots for simply allowing a meta-link to google.com, and it's not something you really think about much. Another thing we've been working on is direct communication. It doesn't seem hard, and it isn't if you do it the way it's done in most systems, but what if you do it the right way? It then becomes a question of what is right and how you do that. We're taking that approach with most of the ways you end up needing to interact online, starting from what the right way to do it is.
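
As a rough illustration of what a route denial mechanism can look like, here's a hedged sketch that denies any meta-link destination not on an explicit allowlist. The allowlist model and the host names in it are assumptions for the example, not our production rules:

```python
# Illustrative sketch of a route denial check for outbound meta-links,
# assuming a simple allowlist model; the real mechanism isn't published.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"example.org", "docs.example.org"}  # hypothetical allowlist

def route_allowed(meta_link: str) -> bool:
    """Deny any meta-link whose destination host is not explicitly allowed."""
    host = urlparse(meta_link).hostname
    return host is not None and host in ALLOWED_HOSTS

print(route_allowed("https://example.org/page"))  # True
print(route_allowed("https://google.com"))        # False: denied by default
```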


A good thing came out of this way of working through problems. For one, we logged over 42,000 different malicious strings that were being used to attack the server and have built a way of mitigating them by default in the future. Now, when the same attacks happen, they fail due to our implementation. Another benefit is that you start seeing around corners and putting measures in place to eliminate unrelated or new attacks in the future. We have a robust dataset of techniques that were used to try to get in or take down the server. Some that worked during the Alpha didn't work during the Beta, and even more that worked until recently will not work during the GAMMA. Overall, the site is becoming much harder to take offline, and we've even employed some hackers to try to take it down with various known exploits. At first they were able to do it, but now they're finding it much more challenging.
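
A simplified sketch of that default mitigation idea, assuming the logged attack strings are kept as a plain pattern set (the real 42,000-entry dataset obviously isn't reproduced here):

```python
# Minimal sketch of default-deny filtering against previously logged attack
# strings, assuming they are stored as a plain substring set. The entries
# below are a tiny illustrative sample, not the actual dataset.
KNOWN_MALICIOUS = {"../", "<script>", "' or 1=1", "/etc/passwd"}

def is_malicious(request_path: str) -> bool:
    """Reject a request if it contains any previously observed attack string."""
    lowered = request_path.lower()
    return any(pattern in lowered for pattern in KNOWN_MALICIOUS)

print(is_malicious("/index.html"))                    # False
print(is_malicious("/download?f=../../etc/passwd"))   # True: fails by default
```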


Many of these mitigation and defense systems are required if we're going to host an access point for an nth level of interacting clients. When you have this type of information derived, measures can be built in so that much destructive behavior is automatically ignored. That's why it was valuable to develop this type of information. Overall, we got some unintended value by slowing down and just observing the way the server behaved without bounds in place. It turns out that bigger companies often don't understand how to use this type of information, even though they've generated similar data. Many of these businesses don't know how to use the data they have available to make their systems stronger or more 'tough'.


One of the main features we needed was a signal denial mechanism to handle various types of signals, and that had to be present before moving into the GAMMA so we could disable various types of connections when required. We don't want to make any choices regarding speech, so we're going to get around a lot of tricky details through careful design up front. We'll be able to do this mainly through the signal processor we've developed.


I just get one of these meta-links and I can use the internet without browsing?

Something like that, but it's more about defining what a specific access point can do and then coming back to it when you need to do something like that again later.


Will this system be transparent and fully open source?

That idea is good and all, but not practical when you look at all the parts. Our main systems will be open source, but certain backend extensions may remain owned by another party that doesn't grant those rights. In all cases, we will look to make sure our extended features and backend/office functions use open source as often as possible, and always as a first choice.


You tested various methods of nodes connecting to each other, what turned out to be the best?

We found that the best way to do this is to store no data at all by default, outside of signal information. Many tough questions come back to this concept. At a controlled scale you can store data and be okay, but at scale? It's much more difficult to be compliant and useful if you're storing too much user data, so we're building a system that stores very little of it by default. We tested the frameworks we had in mind, but many weren't good enough, so we ended up rethinking everything there. "Live free" only works when you collect very little data. Once data generation is opened up to the wild, we don't want to alter the system at all, and if that's the case, we have to start by not collecting very much of it. There may be 3-4 engineers, tops, needed to manage this site at scale. Because there is an OS, I/O frontend and backend, node management software, cryptographic protocols, and other RAM/disk tech, most of the management effort will go into providing this infrastructure, not managing people's personal data (something we don't really want to do at all).
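
As a hedged sketch of the "store no data by default" idea, here's what a connection record could look like if it keeps only signal metadata and never payload contents. The field names are assumptions for illustration:

```python
# Hedged sketch of signal-only storage: a record that keeps the fact that a
# signal occurred, never the payload itself. Field names are assumptions.
from dataclasses import dataclass
from time import time

@dataclass(frozen=True)
class SignalRecord:
    node_id: str        # opaque identifier, not a user identity
    signal_type: str    # e.g. "connect", "disconnect", "route_denied"
    timestamp: float    # when the signal was observed

def log_signal(node_id: str, signal_type: str) -> SignalRecord:
    """Record that a signal occurred; no payload or personal data is stored."""
    return SignalRecord(node_id=node_id, signal_type=signal_type, timestamp=time())

record = log_signal("node-18", "connect")
print(record)
```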


Based on the current design that we're experimenting with, we could grow into a large network of connections rather quickly if we open key generation to the public without authorization codes. If we release our full software suite into the wild too fast it may have to be changed too much, so we're trying to change as much as possible in a controlled state, then, once there aren't any obvious issues, make a generally available system. Now that the possibility of critical failures is fading, each new release and trial gets closer to a public product offering.


If the web is broken, can you fix it with new ways of helping others connect and share information?

We don't want the site to operate outside of a controlled state, period, but we want people to gain agency over what their inputs/outputs within that system end up being. Zero moderation isn't in line with where the law is moving, so there needs to be a balance between features and avoiding centralization. We do need various clever mechanisms built to forecast where things are going, both to get users and to stay compliant. For now, it's about consistency in vision, so having only one engineer is actually the best possible thing for the site.


I'm getting a 400/500 error when I land on the site, what's up with that?

If you see a 500 error on the site, it is most likely out of memory, in the middle of a botched integration, or being tested somehow. Sometimes people run big crawls of sites, and previously the memory manager wasn't set to kill off unreasonable connections, say ones held open for days at a time, or ones sending 50 requests at once over and over again without any boundaries. We'll make sure this issue is gone eventually, and it's getting better. The site will have double the resources available as soon as next week, but that still wouldn't be enough to handle all bad behavior, so technology has to be implemented to do the rest. Really, during the Alpha/Beta it was probably overloaded and not set to scale. That's the most likely reason the site may be throwing an error, but those errors are starting to fade now.


Also, when too many people connect to the same link, it was only set to scale to a certain point, then just run out of memory. This was intended and not random; it helped us develop rate limiting behavior to handle various connection types. That's one of the issues with a controlled Beta: finite resources on purpose, both as a cost tactic and because failure isn't meant to be a forever feature...
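
For illustration, here's a minimal token bucket of the kind commonly used for this sort of rate limiting. The thresholds are made up; our actual limiter and its settings aren't published:

```python
# Sketch of per-connection rate limiting using a simple token bucket.
# The rate and burst values are illustrative assumptions.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if this request is within the limit, False to shed it."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=5, burst=10)
accepted = sum(bucket.allow() for _ in range(50))
print(f"{accepted} of 50 immediate requests accepted")  # roughly the burst size
```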


Do bots affect me, or just the site itself?

The only way a bot knows you have a link is if you've posted it or made it public on the ledger and it isn't masked. As for bots on the site: those bots are isolated by default and will not be able to get your session data.


How are you going to fight bots as you grow?

We have mechanisms in place to prevent bot takedowns but, according to our logs, we still get hit hard occasionally with various requests that are considered "of an unknown type". We will continue to follow best practices for deflecting those connection attempts, because they could cause memory resources to dwindle or even make the service not work as intended. Everyone deals with this, but we're trying to tackle the problem.


The system for scaling memory is in place, but we don't want to scale memory to fight bots. We could use our cloud hosting provider to help with this too, but it can also be done within the application itself.


If we're working on a custom meta-linking scheme for a customer, they could make that connection I/O pool as strong as they needed, for a price of course.


What happens next?

The major structural tweaks and adjustments are almost completed. That means we'll move into a GAMMA that deals with the environment of the site and the production implementation of databases and networking. That'll be done during 2021 and 2022.


After taking the needed time to get infrastructure/logging/compliance within the control mechanism required to make the service possible at scale, we can now expand to more users, though users aren't as important as the methods being reliable. We will double our memory in the cloud again at the start of next year and 10X the allowed simultaneous users by March 1, 2022. We really don't want to add another server in the USA to handle any I/O yet, but will be able to eventually.


When we do add multiple servers, the main site itself will have a specific purpose while the added servers take on specific qualities. This is the way we previously had things running, and the methods are already tested and vetted, so we can use them again in the future. With both methods tested, it is better to separate our site here from the I/O parts themselves elsewhere, but it's not required yet. We now have two versions of the software for connecting between each available resource: one written in Python and the other in Node.


Even if you deploy many full sites like the one here across many servers around the USA and register each one on the ledger, you're still better off serving one static site with a cache store in various locations than having only one access point. This allows us to distribute some sites statically for I/O read-only transactions, and also provide instance/starting points for stateful applications that live in just one location. If you're storing and gathering your data with our tools, you'll be managing what method works for you. We're thinking of giving away that I/O interface code under an open license, which would give you your own private cloud-hosted DNS retrieval system with some extended meta-link features. Any release would most likely be the code discussed above, made available as open source under a proper license. We've run that code as a trial internally and will give it away once the right license is decided.


Closing statements:

There is a debate about what the best set of licenses actually looks like. We're looking into that now, and everything on the software-release side will be updated to provide people with access to the code. There is a chance we may design a completely new license from scratch with the input of trusted legal resources.


Once we have the software distribution model working within a reasonable control threshold, we're going to release more of it to everyone. Before that happens, we'll have to negotiate some discounted cloud rates that help us manage costs for the extended future if we end up self-hosting all of our own assets. It would be great to get 12-16 additional clouds set up before a public rollout to handle distribution of assets such as software packages.


There are standard login portals here now, but we're going to be replacing them soon with the updated methodology of accessing CORE.HOST™. The goal is to have no usernames or passwords at all, though there are ways to set them up once you're active. You'll simply get a key with an expiration date. If you use it, you'll be able to set up an account on your own computers and get the valid public/private key store that's attached to it. So the required I/O transaction code will be open source, but you will need a valid key to talk to CORE.HOST™.
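
Here's a hedged sketch of what issuing an expiring key with an attached public/private key store could look like. The token format and field names are assumptions, not the shipped CORE.HOST™ scheme:

```python
# Hedged sketch of issuing an access key with an expiration date and an
# attached public/private key store. Formats and names are illustrative only.
import secrets
from datetime import datetime, timedelta, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def issue_access_key(valid_days: int = 30) -> dict:
    """Create a one-time access key that expires, plus a keypair for the account."""
    private_key = Ed25519PrivateKey.generate()   # stays on the user's computer
    return {
        "access_key": secrets.token_urlsafe(32),
        "expires_at": datetime.now(timezone.utc) + timedelta(days=valid_days),
        "keypair": (private_key, private_key.public_key()),
    }

def key_is_valid(key: dict) -> bool:
    """An access key only works until its expiration date passes."""
    return datetime.now(timezone.utc) < key["expires_at"]

key = issue_access_key()
print(key_is_valid(key))  # True until the expiration date passes
```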




Summary: Alpha


The Alpha was all about talking with many technology professionals about the future of computers and AI. There were many open projects built out during this time, and some custom work was done to think through robotics endpoint management and the future of activity records online.