Automated API traversal


Armed with a thesaurus and an almanac of system functionality, we can write robots that program themselves.


In RESTful HATEOAS design, a web application's endpoints return, alongside each response, a list of web resources related to the current request, and those resources can in turn be introspected through the API.

A restaurant resource has links (URLs) to a booking resource, because you can book a restaurant.
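As a sketch of that idea (the resource shape, field names, and URLs here are invented, not from any particular API), a HATEOAS response for a restaurant might advertise its related resources like this:

```python
# A hypothetical HATEOAS response for a restaurant resource.
# The "_links" section tells the client which related resources
# exist and how to reach them, without hard-coding URLs.
restaurant = {
    "id": 42,
    "name": "Chez Example",
    "_links": {
        "self":    {"href": "/restaurants/42"},
        "booking": {"href": "/restaurants/42/bookings"},  # you can book a restaurant
    },
}

def follow(resource, relation):
    """Return the URL for a named relation, if the resource advertises it."""
    link = resource.get("_links", {}).get(relation)
    return link["href"] if link else None

print(follow(restaurant, "booking"))  # /restaurants/42/bookings
```

A client written against relation names ("booking") rather than URL templates keeps working when the server reorganizes its paths.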

A system should publish an endpoint that is an almanac of system functionality: every endpoint it has, a thesaurus of keywords used to access that endpoint, and a thesaurus of the operations it supports.
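A minimal sketch of what such an almanac endpoint could return — every field name below is invented for illustration, as no standard for this exists:

```python
# Hypothetical payload of a GET /almanac endpoint: every endpoint the
# system has, the keyword thesaurus used to reach it, and the
# operations it supports.
almanac = {
    "endpoints": {
        "/tweets": {
            "keywords":   ["tweet", "post", "status", "message"],
            "operations": ["list", "listAll", "getAll", "create", "delete"],
        },
        "/bookings": {
            "keywords":   ["booking", "reservation", "table"],
            "operations": ["list", "create", "cancel"],
        },
    },
}

def endpoints_for(almanac, keyword):
    """Find all endpoints whose keyword thesaurus mentions the word."""
    return [path for path, meta in almanac["endpoints"].items()
            if keyword in meta["keywords"]]
```

A caller who only knows the word "reservation" can now discover `/bookings` without reading any documentation.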

A system should also publish a series of workflows that it expects people to use.

This way we can write fuzzy logic for a system from a rough description of what to do, based on the system's thesaurus and almanac.

"Export all my tweets to file"

"All" has a thesaurus entry pointing to "list", "listAll", and "getAll".

So the service knows it has to loop over this collection and save all fields to a file.
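A toy sketch of that resolution step, assuming the system publishes a thesaurus mapping everyday words to candidate operation names (all of the names and the thesaurus contents below are made up):

```python
import json

# Hypothetical thesaurus: natural-language words -> candidate operations.
THESAURUS = {
    "all":    ["list", "listAll", "getAll"],
    "export": ["list", "listAll", "getAll"],
}

# Hypothetical almanac entry: which operations /tweets actually supports.
SUPPORTED = {"listAll", "create", "delete"}

def pick_operation(word):
    """Pick the first thesaurus candidate the service supports."""
    for op in THESAURUS.get(word, []):
        if op in SUPPORTED:
            return op
    return None

def export_to_file(fetch_all, path):
    """Loop over the collection and save all fields to a file."""
    records = fetch_all()  # e.g. the resolved "listAll" operation
    with open(path, "w") as f:
        json.dump(records, f)
    return len(records)

print(pick_operation("all"))  # listAll
```

The robot never hard-codes "listAll"; it derives it from the word "all" plus the service's published capabilities.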





I think there should be a version of nnn for REST APIs: the REST API as a filesystem, and then extending the nnn util to handle .json files would do it. However, I found that FUSE is not very performant, and Linus Torvalds famously said that FUSE filesystems are nothing more than toys...
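Setting FUSE aside, the core mapping that idea needs can be sketched in plain Python (the example response is made up): flatten a JSON document into filesystem-style paths, treating objects and arrays as directories and scalar fields as files, which is exactly what a file browser like nnn would need to list.

```python
# Sketch: map a JSON API response onto filesystem-style paths,
# the way a FUSE layer for a REST-as-filesystem browser would.
def json_to_paths(obj, prefix=""):
    """Flatten nested JSON into (path, value) pairs: dicts and lists
    become directories, scalars become files."""
    if isinstance(obj, dict):
        items = obj.items()
    elif isinstance(obj, list):
        items = enumerate(obj)
    else:
        return [(prefix or "/", obj)]  # a scalar: a leaf "file"
    paths = []
    for key, value in items:
        paths.extend(json_to_paths(value, f"{prefix}/{key}"))
    return paths

response = {"user": {"name": "alice", "tweets": [{"text": "hi"}]}}
for path, value in json_to_paths(response):
    print(path, "->", value)
# /user/name -> alice
# /user/tweets/0/text -> hi
```

A real FUSE implementation would add directory listing and lazy fetching on top of this flattening, but the path scheme is the whole trick.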



// But they still have to be explicitly coded.

But what you propose (the thesaurus and almanac of system functionality) would also have to be explicitly coded, wouldn't it?

You say: "A system should publish an endpoint that is an almanac of system functionality, that is, every endpoint it has, a thesaurus of keywords used to access that endpoint and a thesaurus of operations that it supports."

You would need to code those systems to publish their functionality, so you'd have to modify every system that has functionality to make it publish that. How would your approach avoid this need for explicit coding, that is, for modifying those systems so they publish descriptions of themselves to the "thesaurus and almanac"?



I'm familiar with expert systems such as Drools, which uses the really clever Rete algorithm. And I know about OpenAPI.

But they still have to be explicitly coded.



But this is how APIs work already: that almanac is called "documentation", and it is often already machine-readable (see [the core protocols](https://thecoreprotocols.org) and [OpenAPI](https://www.openapis.org)).

Well, they have a limitation: they do not return vocabularies associated with their object types, and Mime-Types or Content-Types are not sufficiently informative. They could, if those APIs returned JSON-LD responses; or simply decorating their responses with metaformat's [polycontext metasymbol](https://0oo.li/method/863/metaformat) would be enough to bind them to schemas defined in concepts, also linked via the polycontext metasymbol, and everyone could reason about everything while retrieving data about everything, traversing APIs.
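A minimal illustration of the JSON-LD half of that idea (the response is invented; the Wikidata property URLs are used here only as a placeholder shared vocabulary): an "@context" block maps the API's local field names onto shared concept identifiers, so a traversing client can recognize what each field means.

```python
# A plain API response...
response = {"name": "Chez Example", "seats": 30}

# ...decorated with a JSON-LD-style "@context" that binds local
# field names to shared concept identifiers (Wikidata properties
# used as a stand-in vocabulary).
decorated = {
    "@context": {
        "name":  "https://www.wikidata.org/wiki/Property:P2561",
        "seats": "https://www.wikidata.org/wiki/Property:P1083",
    },
    **response,
}

def concept_of(doc, field):
    """Resolve a local field name to its shared concept identifier."""
    return doc.get("@context", {}).get(field)
```

Two APIs that call the same thing "seats" and "capacity" become interoperable once both fields resolve to the same concept identifier.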

There is a bunch of existing solutions that require human work and already work (they are called expert systems); but for this wishful thinking to get realized, much remains to be done, because there exists a lot of not very well documented software, where software types are poorly linked with ontological (linguistic) types.

To create that linguistic connection, so that everyone could auto-generate queries against numerous databases just by thinking of something one would want to know, will require training an AI system to learn the mapping from human examples. There are a couple of layers:

  • synonyms that are part of human language
  • concept IDs (that you can find in Wikidata)
  • class names (that you can find in OOP software)
  • table names (that you can find in databases)
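A toy sketch of linking those layers for a single concept, keyed by one shared identifier (playing the polycontext-metasymbol role). The Wikidata ID Q5 ("human") is real; the synonyms, class name, and table name are illustrative:

```python
# Hypothetical linkage of the layers for one concept. Each entry is
# keyed by a single shared identifier that every layer points to.
CONCEPTS = {
    "Q5": {  # Wikidata: "human"
        "synonyms":   ["person", "human", "user"],  # human language
        "class_name": "User",                       # OOP software
        "table_name": "users",                      # databases
    },
}

def resolve(word):
    """Map a human word down through the layers to code and schema names."""
    for concept_id, layers in CONCEPTS.items():
        if word in layers["synonyms"]:
            return {"concept_id": concept_id,
                    "class_name": layers["class_name"],
                    "table_name": layers["table_name"]}
    return None

print(resolve("person")["table_name"])  # users
```

The AI training the comment describes would amount to learning the `CONCEPTS` table automatically from human examples instead of writing it by hand.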

All of that can be beautifully linked up by defining a single good polycontext metasymbol (I think we should come up with and evolve the polycontext metasymbol for humanity, in a similar way to how we evolve protocols: through RFCs), and that would eventually give us the desired property of being able to reason this way, truly transcendentally, across all protocols and all information systems.

