
CORE.HOST™

BETA

Cognitive ORE

core.host™ provides meta-link structures


Project Details

Recent updates (updated 02/06/2021):

- Doubled the server's RAM

- Doubled the server's CPU core count

- Tripled the available disk I/O operations

Overview for 2021 (updated 02/04/2021):

2020 was a hard year for many people, and this project continues to be adjusted to meet the needs of today. We've been working to make sure various systems are in place to support this site at scale. The site is now owned by CORE.HOST, LLC, a Delaware information technology company.

What are you working on now?

For the last four months we've been developing network routines and logging procedures that are manageable and keep us compliant as we grow. This was an essential first step in our process of opening everything up to the public, and it's now complete.

When we open things up, we plan to keep the site safe in common ways and in some more inventive or abstract ways too. We've been experimenting with the right way to handle many types of system moderation, and we've made a lot of progress: we now have methods in place that should let things run smoothly with 100+ daily active users.

Another goal that became clear during the "ALPHA" was eliminating passwords in the final release-ready version. Moving away from passwords wherever possible is in line with where the rest of the industry is heading, and we have a solid replacement lined up now.

What are you looking to provide to everyone?

We will provide reasonable I/O routing for meta-links to any allowed location within our system.
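To make that concrete, here's a minimal Python sketch of what routing a meta-link through an allowlist could look like. The ALLOWED_LOCATIONS set, the MetaLink class, and the route function are hypothetical names for illustration, not the actual CORE.HOST implementation.

    from dataclasses import dataclass

    # Hypothetical allowlist of locations a meta-link may resolve to.
    ALLOWED_LOCATIONS = {"store/us-east", "store/us-west", "cache/local"}

    @dataclass
    class MetaLink:
        name: str    # the public name of the link
        target: str  # the location the link resolves to

    def route(link: MetaLink) -> str:
        """Resolve a meta-link, refusing anything outside the allowlist."""
        if link.target not in ALLOWED_LOCATIONS:
            raise PermissionError(f"{link.target!r} is not an allowed location")
        return link.target

    # A link that resolves to an allowed store passes through:
    print(route(MetaLink(name="my-notes", target="store/us-east")))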

We're getting some early member feedback (all early feedback comes from running the site locally on port 16788). The most nodes connected at once so far has been 18. Most of this work was a trial, though, and we'll most likely rebuild the entire engine from scratch for launch... If we can get our ORE crypto system into a range that is legal and acceptable, we'll consider rolling this out with more features too. We're discussing the best way to do that now.

So how many different types of features do you need to support a completely new way of interacting online? Do they take a long time to build?

You need quite a few. For instance, this past month we worked on a route denial mechanism after getting hammered by bots. Something good came out of it, though: we logged over 42,000 distinct malicious strings used in the attacks, all of which failed thanks to our planning, and we now have a general database of the techniques used to try to get in. Many of those can be auto-ignored once you have that kind of information, so we got some unintended value there. It turns out bigger companies don't take advantage of this kind of information, even though they have it available, to make their systems even stronger.
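As a rough illustration of that auto-ignore idea, here's a minimal Python sketch that checks incoming request lines against patterns distilled from previously logged attack strings. The pattern list and the auto_ignore function are hypothetical stand-ins for the 42,000-entry database described above.

    import re

    # Hypothetical excerpt of patterns distilled from logged attack strings.
    KNOWN_MALICIOUS = [
        r"\.\./\.\./",          # path traversal attempts
        r"(?i)union\s+select",  # SQL injection probes
        r"(?i)<script\b",       # script injection probes
    ]
    BLOCKLIST = [re.compile(p) for p in KNOWN_MALICIOUS]

    def auto_ignore(request_line: str) -> bool:
        """Return True if the request matches a known attack pattern
        and should be dropped before it reaches the application."""
        return any(p.search(request_line) for p in BLOCKLIST)

    assert auto_ignore("GET /../../etc/passwd")
    assert not auto_ignore("GET /index.html")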

So yes, a denial mechanism had to be in place for disabling various types of connections after a trial. We don't want to make any choices regarding speech, so we're going to avoid a lot of tricky details through careful design up front.

So I no longer need to browse the internet; I just get one of these meta-links?

Something like that...

So, you're planning on making this a transparent and fully open source system?

The idea is appealing, but not practical when you look at all the parts. Our main systems will be open source, but certain back-end extensions may remain owned by another party that doesn't grant those rights. In all cases, we will make sure our extended features and back-office functions use open source as often as possible, and always as a first choice.

You tested various methods of nodes connecting to each other. What's best?

After years of testing the best ways to provide this service at a very controlled scale, it's important that we now expand with those frameworks in mind and keep every next step in a controlled state until the system is ready to "live free". Once it's in the wild we don't want to alter it at all. Managing it at scale may take 3-4 engineers at most, because over 600k lines of code have already been written across OS/IO/Frontend/Backend/Nodes/Crypto/RAM/Disk. Based on the current design and our trials, it could quickly grow into quite a large network of connections. If we had released it into the wild too fast last year, it would have broken under variable use. That possibility is fading with each new release.

So what does this mean? It means that until there is more than the one engineer currently managing this, we're not rushing to do anything. Our costs are fully optimized for now.

We don't want the site to operate outside of a controlled state, period, because that's not in line with where the law is moving. We need various clever mechanisms built in to anticipate where things are going. For now, it's about consistency, so having only one engineer is actually not a bad thing at all. However, we'll need one or two more if we're going to grow it out.

I saw a 500 error when I landed on the site, what's up with that?

If you see a 500 error on the site, it has most likely run out of memory. Sometimes people run big crawls of the site, and previously the memory manager wasn't set to kill off unreasonable connections, say ones held open for days at a time or ones sending 50 requests at once. We'll make sure this issue is gone eventually, and the site will have double the resources available as soon as next week. In short, it's probably overloaded: the site is deliberately not set to scale during the BETA, and that's the most likely reason it may throw an error.

When too many people connect to the same link, the system was only set to scale to a certain point, after which it simply runs out of memory.

That's one of the issues with a controlled BETA: finite resources on purpose, but not meant to be a forever feature...
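A minimal Python sketch of the kind of connection reaping described above might look like this. The thresholds and the should_reap helper are assumptions for illustration; the actual BETA limits haven't been published.

    import time

    # Hypothetical limits on connection age and concurrent requests.
    MAX_CONNECTION_AGE = 60 * 60   # reap connections held open too long
    MAX_INFLIGHT_REQUESTS = 50     # reap connections flooding requests

    class Connection:
        def __init__(self):
            self.opened_at = time.time()
            self.inflight = 0  # requests currently being served

    def should_reap(conn: Connection) -> bool:
        """Flag connections that hold memory for days at a time or fire
        dozens of requests at once, so the memory manager can close them
        before the server runs out of memory and starts returning 500s."""
        too_old = time.time() - conn.opened_at > MAX_CONNECTION_AGE
        too_busy = conn.inflight > MAX_INFLIGHT_REQUESTS
        return too_old or too_busy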

Do bots affect me, or just the site itself?

The only way a bot knows you have a link is if you've posted it or made it public on the ledger.

How are you going to fight bots as you grow?

We have mechanisms in place to prevent bot takedowns, but according to our logs we still occasionally get hit hard with requests classified as "of an unknown type". We will continue to follow best practices for deflecting those connection attempts, because they can cause memory resources to dwindle.

The system for scaling that memory is in place, handled mostly on the network platform management side now. For most users accessing the site, a basic connection will be fine nearly 100% of the time. If 100 users were on 24 hours a day, every day, using it in a normal pattern, the system could probably handle that today. We'll see what happens when we find different ways to power-use through-links.

If we're working on a custom meta-linking scheme for a customer, they could make that connection I/O pool as strong as they need, for a price, of course.

What happens next?

The major structural tweaks and adjustments needed to really get moving this year are almost complete. Some federally protected proofs are also finished now.

After taking the time needed to bring infrastructure, logging, and compliance within the control mechanism required to make the service possible at scale, we can now expand to more users.

We will double our memory in the cloud, 10X our allowed users by March 1, and add another server in the USA to handle the I/O work.

The existing server would then handle the main site itself. This is how we previously had things running, and those methods are already tested and vetted. With both approaches tested, it's better to separate the site here from the I/O parts hosted elsewhere. We now have two versions of the software that connects between each available resource: one written in Python, the other written in Node.
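The Python variant of that connector could look roughly like the sketch below, where the main site delegates meta-link I/O to the separate server over HTTP. The IO_SERVER address, the /resolve endpoint, and the JSON wire format are all assumptions for illustration; the actual connector code hasn't been released.

    import json
    import urllib.parse
    import urllib.request

    # Hypothetical address of the separate I/O server described above.
    IO_SERVER = "http://io.example.internal:8080"

    def fetch_resource(link_name: str) -> dict:
        """Ask the I/O server to resolve a meta-link and return its data,
        keeping the main site free of heavy I/O work."""
        url = f"{IO_SERVER}/resolve?link={urllib.parse.quote(link_name)}"
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp)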

Even if you deploy many full sites like this one across servers around the USA and register each one on the ledger, you'll still be better off serving one static site with a cache store in various locations. This lets us distribute code that performs the I/O transaction, gets you an instance/starting point, and gains a few early routes to resources you can actually use to store and gather your data. We're experimenting with the idea of giving away that I/O code, which would give you your own private cloud-hosted DNS retrieval system with some extended meta-link features. Any release would most likely be the code discussed above, open source under a proper license. We've run that code in an internal trial and will give it away under the right license once that's decided.

Closing statements:

There is a debate about what the best set of licenses actually looks like. We're looking into it now and will update everyone soon on what we decide. There is a chance we may design a completely new one from scratch with the input of trusted legal resources.

Once we have this model working within a reasonable control threshold, we're going to launch to everyone. Before that happens, we will negotiate some discounted cloud rates to help manage costs over the long term. It would be great to get 12-16 additional clouds set up before a public rollout.

There are standard login portals here now, but we're going to replace them soon. The goal is to have no usernames or passwords at all. You'll simply get a key with an expiration date. If you use it, you'll be able to set up an account on your own computer and receive the valid-key. So the I/O transaction code will be open source, but you will need a valid-key to talk to CORE.HOST™.
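A minimal sketch of how an expiring valid-key could replace passwords is shown below, using an HMAC-signed payload with an expiration timestamp. The issue_key and valid_key helpers, the key format, and the one-week default lifetime are illustrative assumptions, not the actual CORE.HOST scheme.

    import hashlib
    import hmac
    import secrets
    import time

    # Hypothetical server-side signing secret.
    SERVER_SECRET = secrets.token_bytes(32)

    def issue_key(user_id: str, ttl_seconds: int = 7 * 24 * 3600) -> str:
        """Issue a signed key that expires after ttl_seconds."""
        expires = int(time.time()) + ttl_seconds
        payload = f"{user_id}:{expires}"
        sig = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}:{sig}"

    def valid_key(key: str) -> bool:
        """Check the signature and the expiration date; no password involved."""
        user_id, expires, sig = key.rsplit(":", 2)
        payload = f"{user_id}:{expires}"
        expected = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected) and time.time() < int(expires)

    # Example: issue a key and verify it before it expires.
    assert valid_key(issue_key("alice"))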