Ridesharing 3.0: Forget About Uber
I have another solution: let’s rebuild the service without any major company involved. Let’s help software eat the world on behalf of the users, not the stockholders. In this post, I’ll explain a way to do it. It’s certainly not trivial, and has some risks, but I think it’s possible and would be a good thing.
The basic idea is similar to this: riders post to social media describing the rides they want, and drivers post about the rides they are available to give. They each look around their extended social graph for posts that line up with what they want, and also check for reasons to trust or distrust each other. That’s about it. You could do this today on Twitter, but it would take some decent software to make it pleasant and reliable, to make the user experience as good as with Uber. To be clear: I’m aiming for a user experience similar to the Uber app; I’m proposing using social media as an underlying layer, not a UI.
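To make this concrete, here is a minimal sketch of what the matching layer might do. The post format, the `RIDE WANTED` / `RIDE OFFERED` convention, and the field names are all my own illustrative assumptions, not a proposal for a wire format:

```python
# Hypothetical sketch: matching rider and driver posts pulled from a feed.
# The post syntax and field names below are illustrative assumptions.

def parse_post(text):
    """Parse a post like 'RIDE WANTED: Cambridge -> Somerville' into a dict."""
    kind, _, route = text.partition(":")
    origin, _, dest = route.partition("->")
    return {
        "kind": "rider" if "WANTED" in kind else "driver",
        "origin": origin.strip(),
        "dest": dest.strip(),
    }

def match(posts):
    """Pair each rider post with any driver post offering the same route."""
    parsed = [parse_post(p) for p in posts]
    riders = [p for p in parsed if p["kind"] == "rider"]
    drivers = [p for p in parsed if p["kind"] == "driver"]
    return [
        (r, d)
        for r in riders
        for d in drivers
        if (r["origin"], r["dest"]) == (d["origin"], d["dest"])
    ]

feed = [
    "RIDE WANTED: Cambridge -> Somerville",
    "RIDE OFFERED: Cambridge -> Somerville",
    "RIDE OFFERED: Boston -> Quincy",
]
print(len(match(feed)))  # 1: only the Cambridge -> Somerville pair lines up
```

A real app would of course match on time windows, price, and trust data as well, but the shape of the problem is the same.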
What’s deeply different in this model is that the provider of the software does not control the market. If I build this today and get millions of users, someone else can come along tomorrow with a slightly better interface or nicer ads, and the customers can move easily, even in the middle of a ride. In particular, the upstart doesn’t need to convince all the riders and drivers to switch in order to bootstrap their system! The market, with all its relationships and data, is outside the ridesharing system. As a user, you wouldn’t even know what software the other riders and drivers are using, unless they choose to tell you.
With this approach, open source solutions would also be viable. Then the competition could arise quite literally tomorrow, as someone just forks a product and makes a few small changes.
This is no fun for big money investors looking for their unicorn exit, but it’s great for end users. They get non-stop innovation, and serious competition for their business.
There are many details, below, including some open issues. The details span areas of expertise, so I’m sure I’ve gotten parts incomplete or wrong. If this vision appeals to you, please help fill it in, in comments or posts of your own.
Bootstrapping
Perhaps the hardest problem with establishing any kind of multi-sided market is getting a critical mass of buyers and sellers. Why would a rider use the system, if there are not yet any drivers? Why would drivers bother using the software when there are no riders? Each time someone tries the system, they find no one else there and they go away unhappy.
In this case, however, I think we have some options. For example, existing drivers, including taxi operators, could start to use the software while they’re doing the driving they already do, with minimum additional effort. Reasons to do it, in addition to optimistically wanting to help bootstrap this: it could help them keep track of their work, and it could establish a track record for when riders start to show up.
Similarly, riders could start to use it in areas without drivers if they understand they’re priming the pump, helping establish demand, and perhaps if there were some fun or useful self-tracking features.
Various communities and businesses, not in the ridesharing business, might benefit from promoting this system: companies who have a lot of employees commuting, large events where parking is a bottleneck, towns with traffic issues, etc. In these cases, in a niche, it’s much easier to get critical mass, which can then spread outward.
Finally, there are existing ridesharing systems that might choose to play in this open ecosystem, either because their motivation is noncommercial (eg eco carpooling) or because they see a way to share the market and still make their cut (eg taxi companies).
Privacy
In the model as I’ve described it so far, there’s no privacy. If I want a ride from San Francisco to Sebastopol, the whole world could see that. My friends might ask, uncomfortably, what I was doing in Sebastopol that day. This is a tricky problem, and there might not be a perfect solution.
In the worst case, the system ends up viable only for the kinds of trips you’re fine making public, perhaps your commute to work, or your trip to an event you’re going to post about anyway. But we can probably do better than that. I currently see two imperfect classes of solution:
- Trust some third party organizations, perhaps to act as information brokers, seeing all the posts from both sides and informing each when there is a match, possibly masking some details. Or perhaps they certify drivers, which gives them access to your data, with an enforceable contract they’ll use it for these purposes only.
- Trust people to act appropriately when given the right social cues and pressure: basically, use advisory access control, where anyone can see the data, but only after they clearly agree that they are acting as part of the ridesharing system and that they will only use the data for that purpose. There might be social or legal penalties for violating this agreement.
There might also be cryptographic solutions, perhaps as an application of homomorphic encryption, but I’m not yet aware of any results that would fully address this issue.
Safety
When I was much younger, hitchhiking was common. If you wanted to go somewhere without having a car, you could stand on the side of the road and stick out your thumb. But there was some notion this might be dangerous for either party, and in some places it became illegal. (Plus there was that terrifying Rutger Hauer and C Thomas Howell movie.) There have been a few stories of assaults by Uber drivers, and the company claims to carefully vet drivers. So how could this work without a company like Uber, standing behind the drivers?
There are several approaches here, that can all work together:
- Remote trust assessment. Each party should be able to see data on the other before agreeing to the ride. This might include social graph connections to the other person, reviews posted about the other person (and by whom), and official certifications about the other person (including even: the ride is from a licensed taxicab). When legally permissible this should, I think, even include information that might be viewed as discriminatory, like Bla Bla Car’s “Ladies Only” setting. It’s a tough trade-off, but I don’t think in a system like this anyone should be forced to ride with someone they’re not comfortable with. Hopefully an active and diverse enough market will allow everyone to get a ride.
- Immediate trust assessment. The expectation has to be set that people can back out of the deal at any point. The person drives up and their car doesn’t look like the picture? You were expecting two passengers and there are three? The driver didn’t turn when you expected them to? In each of these cases, there needs to be a clear “Actually, no thanks” mechanism for ending the process right there. Perhaps the person doing the cancellation incurs a small fee, 5–10 minutes’ pay; not enough that they would endanger their safety for money, but enough to stop frivolous cancellations. (The actual terms would be settled in messaging between apps before the ride.)
- Accountability and evidence. Each party posts details of the arrangement and progress of the ride, so if something inappropriate does happen, there is a trail of evidence. There can be third-party log-bots which archive the details in case one of the parties decides to delete or edit theirs (on systems which support those operations). There can be logged photos and even live streams, and a culture that encourages them. You post a geotagged photo of the car arriving to pick you up, and of the driver, because that’s part of a convincing review; it just happens that postings like that would make it much harder to get away with any crime involving the system.
Payment
One of the great things about the Uber experience is not having to think about money, dig for cash, or decide how much to tip the driver. For me, personally, it feels good to end the ride with thanks, instead of payment, even though I know I’m actually paying.
I think this part’s pretty easy to decentralize. Each party can post the details about pricing and the payment mechanisms supported, including cash, check, credit card, paypal, venmo, square cash, and bitcoin. Some of these resolve or settle later, but the same reputation/trust mechanism used for personal safety should, I think, be able to handle this. If the transaction is canceled hours after the ride, the aggrieved party should be able to make evidence of this clear in their review.
In the worst case, there could be safe-payment services that agree to absorb some of the risk of the transaction, giving more guarantees to both parties, in exchange for higher fees. One of the companies trying to compete with Uber today might consider going into this business, perhaps along with the driver-certification business. They can be like IBM supporting open source to beat back the Microsoft monopoly: they’ll never have the scale to beat Uber by themselves, but if they join the team of “everyone else”, they can probably carve out a nice market segment for themselves.
Now we turn to details about the use of social media to decentralize applications.
Which Social Media?
Which social media should these ridesharing apps use for posting their announcements and looking for announcements from others? Twitter, Facebook, Mastodon, Instagram, … or something set up just for this service? LinkedIn?
I think the answer is: all of the above, if they include the necessary functionality, as detailed below. Right now, I think that’s Twitter and Mastodon, but there may be a way to make other systems behave as needed.
Context Collapse
Of course, no one wants to see a lot of irrelevant ridesharing posts. This is a special case of the general problem of Context Collapse: when we try to unify communications, to get economies of scale and critical mass, we can end up involved in a lot of unwanted communications.
I see three solutions here. I’m most optimistic about the last one:
- Keep the systems separate. Make a separate social networking system for ridesharing. But that would be hard, and ridesharing is only one of many, many applications we’d like to decentralize. If there’s a huge hassle and/or expense for each one, we won’t get that decentralization.
- Use separate accounts for each user for each app. I could use sandhawke_ridesharing on Twitter, etc, to keep this traffic separate. But it will still show up in search results, and I actually want to be able to leverage the existing social graph, not make a wholly new one.
- Zones, or opt-in hashtags. With this approach, “zone posts” are only seen by people who are looking for them and by systems which are built to use them. This is worthy of a post of its own, so I’ll do that soon.
Interoperability
What would the actual posts look like? Current software industry trends would suggest something like {"rideFrom":{"lat":42.361958, "lon":-71.09122}}. This kind of JSON data works well when one organization controls the data format, but it breaks down when people do not all agree on all the syntactic and semantic details of the format. Even if people are inclined to agree, actually getting it well specified is difficult and time consuming. (Ask anyone who’s been involved in a data interchange standardization effort.)
My favorite approach is to use natural language sentence templates. Instead of all agreeing we’ll use JSON and the ‘rideFrom’ property, etc, apps use strings that read like natural language sentences, conveying exactly what they mean, but which can easily be parsed using declared input templates. This concept also needs a post of its own, so I’ll go into that separately.
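As a rough sketch of how template-based parsing could work (the template wording and the mechanism here are my own assumptions, not a worked-out design): an app declares a template that reads as a sentence, and incoming posts are matched against it with the blanks captured.

```python
import re

# Hypothetical sketch of sentence-template parsing: apps publish templates
# that read as natural language, and parse posts against them, instead of
# everyone agreeing in advance on JSON property names.

TEMPLATE = "I am looking for a ride from {origin} to {destination}."

def template_to_regex(template):
    """Turn a declared template into a regex with named capture groups."""
    pattern = re.escape(template)
    # re.escape turns "{origin}" into "\{origin\}"; convert each such
    # placeholder into a named, non-greedy capture group.
    pattern = re.sub(r"\\\{(\w+)\\\}", r"(?P<\1>.+?)", pattern)
    return re.compile("^" + pattern + "$")

rx = template_to_regex(TEMPLATE)
m = rx.match("I am looking for a ride from San Francisco to Sebastopol.")
print(m.groupdict())  # {'origin': 'San Francisco', 'destination': 'Sebastopol'}
```

The point is that the same string is both human-readable documentation and a machine-parseable format.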
Reach
It’s great if I can catch a ride from an existing contact, but in most cases I will need to reach farther out in my social graph. Some systems, like Twitter, make that graph public. Some don’t. Twitter’s graph, however, doesn’t represent endorsement or trust. I follow a few unpleasant and untrustworthy people just to keep track of what they’re up to. So we need some trust data in the social graph. This is hard.
Some wild ideas:
- Before relying on a path through the social graph, software could ping all the people on that path asking them to confirm these connections are based on trust. But often responses will be too slow, if they come at all. Still, at least the responses could be used later. People might be motivated to respond by having an important and trusted role in the process. On the other hand, people might hesitate to disclose how much they trust or fail to trust some other people.
- Sentiment analysis of replies?
- Confirming by using multiple social networks? For me, LinkedIn connections probably convey the most trust, but I understand that varies.
- More ideas…? How do we draw out people’s judgments of who else is trustworthy?
An alternative to having the social graph be visible is to have user-configured bots which automatically boost some posts on the user’s behalf. If a friend of mine is looking for a ride, my bot can post that my friend is looking for a ride, etc.
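A boost-bot of that kind could be quite simple. This sketch assumes a feed of (author, text) pairs and a hardcoded contact list; everything here is illustrative, not a design:

```python
# Hypothetical sketch of a boost-bot: it watches the feed and re-posts ride
# requests from direct contacts, extending a post's reach by one hop without
# exposing the user's full social graph.

MY_FRIENDS = {"alice", "bob"}

def boost(feed, me="sandro"):
    """Return the re-posts this user's bot would make on friends' behalf."""
    reposts = []
    for author, text in feed:
        if author in MY_FRIENDS and "looking for a ride" in text:
            reposts.append((me, f"Boosting for {author}: {text}"))
    return reposts

feed = [
    ("alice", "I am looking for a ride from Cambridge to Somerville."),
    ("mallory", "I am looking for a ride from Boston to Quincy."),
]
print(boost(feed))  # only alice's request is boosted; mallory is not a contact
```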
Conclusion
That’s what I’ve got. Looking back, I can see it’s quite a lot. Have I made anything harder than it needs to be? Is there a nice MVP that can ignore the complex issues? Are there other problems I’m missing?
Back to Blogging
As I remember it, about ten years ago, I started this blog for one main reason. I had just watched a talk from the CTO of Second Life (remember them, when they were hot?) about his vision for how to expand by opening up the system, making it decentralized. I thought to myself: that’s going to be really hard, but I’ve thought about it a lot; I should blog my ideas about it.
As I dug into the problem, however, I realized how many sub-problems I couldn’t really solve, yet. So I never posted. (Soon thereafter, Cory Ondrejka left the Second Life project, moving on to run engineering at Facebook. Not sure if that’s ironic.)
This time the “industry darling” I want to tackle first is Uber. Okay, it’s already become widely hated, but the valuation is still, shall we say, … considerable.
So, coming soon: how to decentralize Uber.
GrowJSON
June 30, 2014
I have an idea that I think is very important but I haven’t yet polished to the point where I’m comfortable sharing it. I’m going to share it anyway, unpolished, because I think it’s that useful.
The problem I’m trying to solve is at the core of decentralized (or loosely-coupled) systems. When you have an overall system (like the Web) composed of many subsystems which are managed on their own authority (websites), how can you add new features to the system without someone coordinating the changes?
RDF offers a solution to this, but it turns out to be pretty hard to put into practice. As I was thinking about how to make that easier, I realized my solution works independently of the rest of RDF. It can be applied to JSON, XML, or whatever. For now, I’m going to start with JSON.
Consider two on-the-web temperature sensors:
> GET /temp HTTP/1.1
> Host: paris.example.org
> Accept: text/json
>
< HTTP/1.1 200 OK
< Content-Type: text/json
<
{"temp":35.2}
> GET /temp HTTP/1.1
> Host: berkeley.example.org
> Accept: text/json
>
< HTTP/1.1 200 OK
< Content-Type: text/json
<
{"temp":35.2}
The careful human reader will immediately wonder whether these temperatures are in Celsius or Fahrenheit, or if maybe the first is in Celsius and the second Fahrenheit. This is a trivial example of a much deeper problem.
Here’s the first sketch of my solution:
> GET /temp HTTP/1.1
> Host: paris.example.org
> Accept: text/json
>
< HTTP/1.1 200 OK
< Content-Type: text/json
<
[
{"GrowJSONVersion": 0.1,
"defs": {
"temp": "The temperature in degrees Fahrenheit as measured by a sensor and expressed as a JSON number"
}},
{"temp":35.2}
]
> GET /temp HTTP/1.1
> Host: berkeley.example.org
> Accept: text/json
>
< HTTP/1.1 200 OK
< Content-Type: text/json
<
[
{"GrowJSONVersion": 0.1,
"defs": {
"temp": "The temperature in degrees Fahrenheit as measured by a sensor and expressed as a JSON number"
}},
{"temp":35.2}
]
I know it looks ugly, but now it’s clear that both readings are in Fahrenheit.
My proposal is that much like some data-consuming systems do schema validation now, GrowJSON data-consuming systems would actually look for that exact definition string.
This way, if a third sensor came on line:
> GET /temp HTTP/1.1
> Host: doha.example.org
> Accept: text/json
>
< HTTP/1.1 200 OK
< Content-Type: text/json
<
[
{"GrowJSONVersion": 0.1,
"defs": {
"temp": "The temperature in degrees Celsius as measured by a sensor and expressed as a JSON number"
}},
{"temp":35.2}
]
The consumer would see that this definition doesn’t match the one it knows, so it would know it doesn’t understand the data, rather than silently misreading Celsius as Fahrenheit.
That’s the essence of the idea. Any place you might have ambiguity or a naming collision in your JSON, instead use natural language definitions that are detailed enough that (1) two people are very unlikely to choose the same text, (2) if they did, they’re extremely likely to have meant the same thing, and, while we’re at it, (3) they will help people implement code to handle it.
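Here is a minimal sketch of what a GrowJSON-style consumer might look like. The mapping from definition strings to internal names is my own illustrative assumption; the essential move is that the consumer matches on the exact definition text, not on the JSON key:

```python
import json

# Minimal sketch of a GrowJSON-style consumer: it accepts a field only if
# the sender's definition string exactly matches a definition it was built
# against. The keys themselves may differ; only the definitions must match.

KNOWN_DEFS = {
    "The temperature in degrees Fahrenheit as measured by a sensor "
    "and expressed as a JSON number": "temp_f",
}

def read_temp(doc_text):
    """Return {internal_name: value} for fields whose definitions we know."""
    header, payload = json.loads(doc_text)
    out = {}
    for key, definition in header["defs"].items():
        if definition in KNOWN_DEFS:  # exact string match, by design
            out[KNOWN_DEFS[definition]] = payload[key]
    return out

doc = json.dumps([
    {"GrowJSONVersion": 0.1,
     "defs": {"tmp": "The temperature in degrees Fahrenheit as measured "
                     "by a sensor and expressed as a JSON number"}},
    {"tmp": 35.2},
])
print(read_temp(doc))  # {'temp_f': 35.2} -- even though the key is 'tmp'
```

A Celsius definition would simply fail to match, so the consumer skips the field instead of misinterpreting it.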
I see you shaking your head in disbelief, confusion, or possibly disgust. Let me try answering a few questions:
Question: Are you really suggesting every JSON document would include complete documentation of all the fields used in that JSON document?
Conceptually, yes, but in practice we’d want to have an “import” mechanism, allowing those definitions to be in another file or Web Resource. That might look something like:
> GET /temp HTTP/1.1
> Host: paris.example.org
> Accept: text/json
>
< HTTP/1.1 200 OK
< Content-Type: text/json
<
[
{"GrowJSONVersion": 0.1},
{"import": "http://example.org/schema",
"requireSHA256": "7998bb7d2ff3cfa2666016ea0cd7a379b42eb5b0cebbb1142d8f086efaccfbc6"},
{"temp":35.2}
]
> GET /schema HTTP/1.1
> Host: example.org
> Accept: text/json
>
< HTTP/1.1 200 OK
< Content-Type: text/json
<
[
{"GrowJSONVersion": 0.1,
"defs": {
"temp": "The temperature in degrees Fahrenheit as measured by a sensor and expressed as a JSON number"
}}
]
Question: Would that break if you didn’t have a working Internet connection?
No, by including the SHA we make it clear the bytes aren’t allowed to change. So the data-consumer can actually hard-code the results of retrieval obtained at build time.
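The hash check itself is a few lines. This sketch pins a schema's bytes at build time and verifies any later copy against the digest; the helper names are my own:

```python
import hashlib
import json

# Sketch of the import-with-hash idea: since the SHA-256 pins the exact
# bytes, a consumer can verify a cached or hardcoded copy of the imported
# schema without any network access.

schema_bytes = json.dumps([
    {"GrowJSONVersion": 0.1,
     "defs": {"temp": "The temperature in degrees Fahrenheit as measured "
                      "by a sensor and expressed as a JSON number"}}
]).encode("utf-8")

def verify_import(data, expected_sha256):
    """Accept imported schema bytes only if their SHA-256 digest matches."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

pinned = hashlib.sha256(schema_bytes).hexdigest()  # stored in the document
print(verify_import(schema_bytes, pinned))       # True
print(verify_import(b"tampered bytes", pinned))  # False
```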
Question: Would the object keys still have to match?
No, only the definitions. If the Berkeley sensor used tmp instead of temp, the consumer would still be able to understand it just the same.
Question: Is that documentation string just plaintext?
I’m not sure yet. I wish markdown were properly standardized, but it’s not. The main kind of formatting I want in the definitions is links to other terms defined in the same document. Something like these [[term]] expressions:
{"GrowJSONVersion": 0.1,
"defs": {
"temp": "The temperature in degrees Fahrenheit as measured by a sensor at the current [[location]] and expressed as a JSON number",
"location": "The place where the temperature reading [[temp]] was taken, expressed as a JSON array of two JSON numbers, being the longitude and latitude respectively, expressed as per GRS80 (as adopted by the IUGG in Canberra, December 1979)"
}}
As I’ve been playing around with this, I keep finding good documentation strings include links to related object keys (properties), and I want to move the names of the keys outside the normal text, since they’re supposed to be able to change without changing the meaning.
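Extracting those cross-references is straightforward; a consumer could use something like this to check that every `[[term]]` mentioned in a definition is itself defined in the same document (a sketch, assuming the double-bracket syntax above):

```python
import re

# Sketch: pull [[term]] cross-references out of a definition string, so a
# consumer can verify each referenced term is defined in the same document.

def linked_terms(definition):
    """Return the [[term]] names referenced inside a definition string."""
    return re.findall(r"\[\[(\w+)\]\]", definition)

d = ("The place where the temperature reading [[temp]] was taken, "
     "expressed as a JSON array of two JSON numbers")
print(linked_terms(d))  # ['temp']
```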
Question: Can I fix the wording in some definition I wrote?
Yes, clearly that has to be supported. It would be done by keeping around the older text as an old version. As long as the meaning didn’t change, that’s okay.
Question: Does this have to be in English?
No. There can be multiple languages available, just like having old versions available. If any one of them matches, it counts as a match.
The Web is like Beer
Lots of people can’t seem to understand the relationship of the Web to the Internet. So I’ve come up with a simple analogy:
The Internet is like alcohol. The Web is like beer: the most popular drink containing it.
For some people, sometimes, they are essentially synonymous, because they are often encountered together. But of course they are fundamentally different things.
But there are lots of other uses, too, somewhat more obscure. We could say the various chat protocols are the various Whiskeys. IRC is Scotch; XMPP is Bourbon.
gopher is obscure and obsolete, …. maybe melomel.
ssh is potato vodka.
I leave the rest to your imagination.
Note that the non-technician never encounters raw Internet, just like they never encounter pure alcohol. They wouldn’t know what it was if it stepped on their foot. Of course, chemists are quite familiar with pure alcohol, and network technicians and programmers are familiar with TCP, UDP, and IP.
The familiar smell of alcohol, that you can detect to some degree in nearly everything containing alcohol — that’s DNS.
NSA Certification
February 27, 2014
The world of computing has a huge problem with surveillance. Whether you blame the governments doing it or the whistleblowers revealing it, the fact is that consumer adoption and satisfaction is being inhibited by an entirely-justified lack of trust in the systems.
Here’s how the NSA can fix that, increase the safety of Americans, and, I suspect, redeem itself in the eyes of much of the country. It’s a way to act with honor and integrity, without betraying citizens, businesses, or employees. The NSA can keep doing all the things it feels it must to keep America safe (until/unless Congress or the administration changes those rules), and by doing this additional thing it would be helping protect us all from the increasing dangers of cyber attacks. And it’s pretty easy.
The proposal is this: establish a voluntary certification system, where vendors can submit products and services for confidential NSA review. In concluding its review, the NSA would enumerate for the public all known security vulnerabilities of the item. It would be under no obligation to discover vulnerabilities. Rather, it would simply need to disclose to consumers all the vulnerabilities of which it happens to know, at that time and on an ongoing basis, going forward.
Vendors could be charged a reasonable fee for this service, perhaps on the order of 1% of gross revenue for that product.
Crucially, the NSA would accept civil liability for any accidental misleading of consumers in its review statements. Even more important: the NSA chain of command from the top down to the people doing the review would accept criminal liability for any intentionally misleading statements, including omissions. I am not a lawyer, but I think this could be done easily by having the statements include sworn affidavits stating both their belief in these statements and their due diligence in searching across the NSA and related entities. I’m sure there are other options too.
If congress wants to get involved, I think it might be time to pass an anti zero day law, supporting NSA certification. Specifically, I’d say that anyone who knows of a security vulnerability in an NSA certified product must report it immediately to the NSA or the vendor (which must tell each other). 90 days after reporting it, the person who reported it would be free to tell anyone / everyone, with full whistleblower protection. Maybe this could just be done by the product TOS.
NSA certified products could still include backdoors and weaknesses of all sorts, but their existence would no longer be secret. In particular, if there’s an NSA back door, a cryptographic hole for which they believe they have the only key, they would have to disclose that.
That’s it. Dear NSA, can you do this please?
For the rest of you, if you work at the kind of company the Snowden documents reveal to have been compromised, the companies who somehow handle user data, would you support this? Would your company participate in the program, regaining user trust?
Fix 303 with client-side redirects
April 6, 2012
I am trying to stay far away from the current TAG discussions of httpRange-14 (now just HR14). I did my time, years ago. I came up with the best solution to date: use “303 See Other”. It’s not pretty, but so far it is the best we’ve got.
I gather now the can of worms is open again. I’m not really hungry for worms, but someone mentioned that the reason it’s open again is that use of 303 is just too inefficient. And if that’s the only problem, I think I know the answer.
If a site is doing a lot of redirects, in a consistent pattern, it should publish its rewrite rules, so the clients can do them locally.
Here’s a strawman proposal:
We define an RFC 5785 well-known URI pattern: .well-known/rewrite-rules. At this location, on each host, the host can publish some of its rewrite and redirection rules. The syntax is a tiny subset of the Apache RewriteRule syntax. For example:
# We moved /team to /staff
RewriteRule /team/(.*) /staff/$1 301
# All the /id/ pages get 303'd to the doc pages
RewriteRule (.*)/id/(.*) $1/doc/$2 303
The syntax here is: comments start with a hash mark; non-comments have four fields, separated by whitespace. The first field is the word “RewriteRule”. The second is a regular expression. The third is a string with back-references into the regular expression. The fourth is a numeric redirect code. Any line not matching this syntax, or otherwise not understood by the client, is to be ignored.
Clients that do not implement this specification will function unchanged, not looking at this file. Clients that do implement this specification keep track of how many times they get an HTTP redirect from a given host. If they get three or more redirects during one small period of time (such as a minute, or one run of the client if the client is short-lived), they perform a GET on /.well-known/rewrite-rules.
If the GET succeeds, the result should be cached using normal HTTP caching rules. If the result is not cached, this protocol is less efficient than server-side redirects. If the result is cached too long, clients may see incorrect data, so clients must not cache the result for longer than permitted by HTTP caching rules. (Maybe we make an exception for simple-minded clients and say they MAY ignore cache control information and just cache the document for up to 60 seconds.)
If a client has a non-stale set of rewrite-rules from a given host, it should attempt to perform those rewrite rules client-side. For any GET, PUT, etc, it should match the URL (after the scheme name and colon) against the regular expression; if the match succeeds, it should perform the match-substitution into the destination string and use that for the operation, as if it had gotten a redirect (with the given redirect code).
As an example deployment, consider DBPedia. Everything which is the primary subject of a Wikipedia entry has a URL of the form http://dbpedia.org/resource/page_title. When the client does a GET on that URL, it receives a 303 See Other redirect to either http://dbpedia.org/data/page_title or http://dbpedia.org/page/page_title, depending on the requested content type.
So, with this proposal, DBPedia would publish, at http://dbpedia.org/.well-known/rewrite-rules this content:
RewriteRule /resource/(.*) /data/$1 303
This would allow clients to rewrite their /resource/ URLs, fetch the /data/ pages directly, and never go through the 303 redirect dance again.
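A client-side implementation could be quite small. This sketch parses the strawman format above (ignoring unparseable lines, as specified) and applies the first matching rule locally; the function names are my own:

```python
import re

# Sketch of a client applying published rewrite-rules locally. The format
# follows the strawman: hash comments, then
# "RewriteRule <regex> <substitution> <code>"; other lines are ignored.

def parse_rules(text):
    """Parse a rewrite-rules document into (regex, destination, code) tuples."""
    rules = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        if len(fields) == 4 and fields[0] == "RewriteRule" and fields[3].isdigit():
            rules.append((re.compile(fields[1]), fields[2], int(fields[3])))
    return rules

def rewrite(url_path, rules):
    """Apply the first matching rule, as if the redirect had been followed."""
    for rx, dest, code in rules:
        m = rx.match(url_path)
        if m:
            # translate Apache-style $1 back-references to Python's \1
            return m.expand(dest.replace("$", "\\")), code
    return url_path, None

rules = parse_rules("# DBPedia-style rule\nRewriteRule /resource/(.*) /data/$1 303\n")
print(rewrite("/resource/Amsterdam", rules))  # ('/data/Amsterdam', 303)
```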
The content-negotiation issue could be handled by traditional means at the /page/* address. When the requested media type is not a data format, the response could use a Content-Location header, or a 307 Temporary Redirect. The redirect is much less painful here; this is a rare operation compared to the number of operations required when a Semantic Web client fetches all the data about a set of subjects.
My biggest worry about this proposal is that RewriteRules are error prone, and if these files get out of date, or the client implementation is buggy, the results would be very hard to debug. I think this could be largely addressed by Web servers generating this resource at runtime, serializing the appropriate parts of the internal data structures they use for rewriting.
This could be useful for the HTML Web, too. I don’t know how common redirects are in normal Web browsing or Web crawling. It’s possible the browser vendors and search engines would appreciate this. Or they might think it’s just Semantic Web wackiness.
So, that’s it. No more performance hit from 303 See Other. Now, can we close up this can of worms?
ETA: dbpedia example. Also clarified the implications for the HTML Web.
RDF Steps Carefully Forward
April 14, 2011
18 months ago, when Ivan Herman and I began to plan a new RDF Working Group, I posted my RDF 2 Wishlist. Some people complained that the Semantic Web was not ready for anything different; it was still getting used to RDF 1. I clarified that “RDF 2” would be backward compatible and not break existing systems, just like “HTML 5” isn’t breaking the existing Web. Still, some people preferred the term “RDF 1.1”.
The group just concluded its first face-to-face meeting, and I think it’s now clear we’re just doing maintenance. If we were to do version numbering, it might be called “RDF 1.0.1”. This might just be “RDF Second Edition”. Basically, the changes will be editorial clarifications and bug fixes.
The adventurer in me is disappointed. It’s a bit like opening your birthday present to find nice warm socks, instead of the jet pack you were hoping for.
Of course, this was mostly clear from the workshop poll and the charter, but still, I had my hopes.
The most dramatic change the group is likely to make: advise people to stop using xs:string in RDF. Pretty exciting. And, despite unanimous support from the 14 people who expressed an opinion in the meeting, there has now been some strong pushback from people not at the meeting. So I think that’s a pretty good measure of the size change we can make.
As far as new stuff…. we’ll probably come up with some terminology for talking about graphs, and maybe even a syntax which allows people to express information about graphs and subgraphs. But one could easily view that as just properly providing the functionality that RDF reification was supposed to provide. So, again, it’s just a (rather complicated) bug fix. And yes, making Turtle a REC, but it’s already a de facto standard, so (again) not a big deal.
The group also decided, with a bit of disappointment for some, not to actively push for a JSON serialization that appeals to non-RDF-folks. This was something I was interested in (cf JRON) but I agree there’s too much design work to do in a Working Group like this. The door was left open for the group to take it up again, if the right proposal appears.
So, it’s all good. I’m comfortable with all the decisions the group made in the past two days, and I’m really happy to be working with such a great bunch of people. I also had a nice time visiting Amsterdam and taking long walks along the canals. But, one of these days, I want my jet pack.
Elevator Pitch for the Semantic Web
SemanticWeb.com invited people to make video elevator pitches for the Semantic Web, focused on the question “What is the Semantic Web?”. I decided to give it a go.
I’d love to hear comments from folks who share my motivation, trying to solve this ‘every app is a walled garden’ problem.
In case you’re curious, here’s the script I’d written down, which turned out to be wayyyy too long for the elevators in my building, and also too long for me to remember.
Eric Franzon of SemanticWeb.Com invited people to send in an elevator pitch for the Semantic Web. Here’s mine, aimed at a non-technical audience. I’m Sandro Hawke, and I work for W3C at MIT, but this is entirely my own view.
Imagine I want to share photos with you. If I use facebook, you have to use facebook. If I use flickr, you have to use flickr.
It’s like this for nearly every kind of software out there.
In a few areas, we do have standards, so it doesn’t matter: email gets through no matter which provider each of us uses, and the Web itself works the same way.
In other areas, though, we’re stuck, because we don’t have these standards, and we’re not likely to get them any time soon. So if you want to create, explore, play a game, or generally collaborate with a group of people on line, every person in the group has to use the same software you do. That’s a pain, and it seriously limits how much we can use these systems.
I see the answer in the Semantic Web. I believe the Semantic Web will provide the infrastructure to solve this problem. It’s not ready yet, but when it is, programs will be able to use the Semantic Web to automatically merge data with other programs, making them all — automatically — compatible.
If I were up to doing another take, I’d change the line about the Semantic Web not being much yet. And maybe add a little more detail about how I see it working. I suppose I’d go for this script:
What is the Semantic Web?
Well, right now, it’s a set of technologies that are seeing some adoption and can be useful in their own right, but what I want it to become is the way everyone shares their data, the way all software works together.
This is important because every program we use locks us into its own little silo, its own walled garden.
For example, imagine I want to share photos with you. If I use facebook, you have to use facebook. If I use flickr, you have to use flickr. And if I want to share with a group, they all have to use the same system.
The Semantic Web can solve this. When it’s ready, programs will use it to automatically share and merge their data, making them all compatible.
I’m Sandro Hawke, and I work for W3C at MIT. This has been entirely my own opinion.
(If only I could change the video as easily as that text. Alas, that’s part of the magic of movies.)
So, back to the subject at hand. Who is with me on this?
January 28, 2011
I’m disappointed in the pace of development of the Semantic Web, and I’m optimistic that the Lean Startup ideas can help us move things along faster.
I’ve been a fan of Eric Ries and the Lean Startup ideas for a while, but last night I was lucky enough to get to see him speak, and to chat with some other adherents. There are a lot of ideas here, but the bit that jumps out at me today is this:
I think we have scant evidence that the Semantic Web will work, and that most of us have been working on it as an act of faith. We believe, without solid evidence, that it can work and will be a good thing when it does. You could say we’re operating in a different kind of RDF: a reality distortion field.
The Lean Startup methodology says that we should get out of that field as quickly as possible, doing the fastest experiments that will teach us what really works and what does not. On faith we can do 5+ year projects, hoping to show something interesting. Instead, we should be doing projects of under 3 months, each testing a hypothesis about how this is all going to be useful.
It’s a shame that most of us are funded in ways that don’t support or reward this at all. It’s a shame the research funding agencies operate on such a glacial and massive scale; in many ways they seem geared more towards keeping people busy and employed than actually innovating and producing knowledge for the world.
Below are my notes taken during Eric’s talk. I have not cleaned them up at all, so you can see just how badly my fingers spell “entrepreneur” when my brain has moved on to something else. I believe slides and the talk itself are available on line; it’s a talk he often gives, so if you have the time, watch it instead of just skimming my notes. (eg this one at Stanford.) Someone else with much better formatting and spelling posted their notes from last night’s talk. You probably want to read them instead, and then come back here and share your insights with us.
Simplified RDF
November 10, 2010
I propose that we designate a certain subset of the RDF model as “Simplified RDF” and standardize a method of encoding full RDF in Simplified RDF. The subset I have in mind is exactly the subset used by Facebook’s Open Graph Protocol (OGP), and my proposed encoding technique is relatively straightforward.
I’ve been mulling over this approach for a few months, and I’m fairly confident it will work, but I don’t claim to have all the details perfect yet. Comments and discussion are quite welcome, on this posting or on the semantic-web@w3.org mailing list. This discussion, I’m afraid, is going to be heavily steeped in RDF tech; simplified RDF will be useful for people who don’t know all the details of RDF, but this discussion probably won’t be.
My motivation comes from several directions, including OGP. With OGP, Facebook has motivated a huge number of Web sites to add RDFa markup to their pages. But the RDF they’ve added is quite constrained, and is not practically interoperable with the rest of the Semantic Web, because it uses simplified RDF. One could argue that Facebook made a mistake here, that they should be requiring full “normal” RDF, but my feeling is their engineering decisions were correct, that this extreme degree of simplification is necessary to get any reasonable uptake.
I also think simplified RDF will play well with JSON developers. JRON is pretty simple, but simplified RDF would allow it to be simpler still. Or, rather, it would mean folks using JRON could limit themselves to an even smaller number of “easy steps” (about three, depending on how open design issues are resolved).
Simplified RDF makes the following radical restrictions to the RDF model and to deployment practice:
- The subject URIs are always web page addresses. The content-negotiation hack for “hash” URIs and the 303-see-other hack for “slash” URIs are both avoided.

  (Open issue: are html fragment URIs okay? Not in OGP, but I think they will be okay and useful.)

- The values of the properties (the “object” components of the RDF triples) are always strings. No datatype information is provided in the data, and object references are done by just putting the object URI into the string, instead of making it a normal URI-labeled node.

  (Open issue: what about language tags? I think RDFa will provide this for free in OGP, if the html has a language tag.)

  (Open issue: what about multi-valued (repeated) properties? Are they just repeated, or are the multiple values packed into the string, perhaps? OGP has multiple administrators listed as “USER_ID1,USER_ID2”. JSON lists are another factor here.)
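Concretely, a simplified-RDF dataset is just page-URL / property / string triples. Here is a sketch in Python; the movie and person URLs, and the og:director property, are my own illustrative inventions in the style of OGP:

```python
# Simplified RDF: every triple is (page URL, property, plain string).
# All URLs and the og:director property are hypothetical examples.
simplified_triples = [
    ("http://example.org/movies/rocky", "og:title", "Rocky"),
    ("http://example.org/movies/rocky", "og:type", "movie"),
    # An object *reference* is just a URI written into the string value:
    ("http://example.org/movies/rocky", "og:director",
     "http://example.org/people/avildsen"),
]

# The radical restriction: no datatypes, no URI-labeled object nodes --
# every object is a plain string.
assert all(isinstance(obj, str) for _subj, _pred, obj in simplified_triples)
```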
At first inspection this reduction appears to remove so much from RDF as to make it essentially useless. Our beloved RDF has been blown into a hundred pieces and scattered to the wind. It turns out, however, it still has enough magic to reassemble itself (with a little help from its friends, http and rdfs).
This image may give a feeling for the relationship of full RDF and simplified RDF:
The basic idea is that given some metadata (mostly: the schema), we can construct a new set of triples in full RDF which convey what the simplified RDF intended. The new set will be distinguished by using different predicates, and the predicates are related by schema information available by dereferencing the predicate URI. The specific relations used, and other schema information, allows us to unambiguously perform the conversion.
For example, og:title is intended to convey the same basic notion as rdfs:label. They are not the same property, though, because og:title is applied to a page about the thing which is being labeled, rather than the thing itself. So rather than saying they are related by owl:equivalentProperty, we say:
og:title srdf:twin rdfs:label.
This ties them together, saying they are “parallel” or “convertible”, and allowing us to use other information in the schema(s) for og:title and rdfs:label to enable conversion.
The conversion goes something like this:
- The subject URLs should usually be taken as pages whose foaf:primaryTopic is the real subject. That real subject can be identified with a blank node or with a constructed URI using a “thing described by” service such as t-d-b.org. A little more work is needed on how to make such services efficient, but I think the concept is proven. I’d expect Facebook to want to run such a service.

  In some cases, the subject URL really does identify the intended subject, such as when the triple is giving the license information for the web page itself. These cases can be distinguished in the schema by indicating the simplified RDF property is an IndirectProperty or MetadataProperty.
- Similarly, the simplified RDF technique of putting URIs in strings for the object can be undone by knowing the twin is an ObjectProperty, or has some non-Literal range.
I believe language tagging could also be wrapped into the predicate (like comment_fr, comment_en, comment_jp, etc.) if that kind of thing turns out to be necessary, using an OWL 2 range restriction on the rdf:langRange facet.
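Putting the steps above together, the conversion might look something like this. This is my own sketch under stated assumptions, not a spec: the hard-coded schema table stands in for information that would really be obtained by dereferencing each predicate URI, and primary_topic stands in for a “thing described by” service such as t-d-b.org. The ex:director twin and all URLs are hypothetical:

```python
# Sketch: reconstructing full RDF from simplified RDF, given schema metadata.
# Maps each simplified predicate to its full-RDF "twin" and notes whether
# the twin is an ObjectProperty (so URI-in-string objects get undone).
SCHEMA = {
    "og:title":    {"twin": "rdfs:label",  "object_property": False},
    "og:director": {"twin": "ex:director", "object_property": True},
}

def primary_topic(page_url):
    """Mint a URI for the thing a page is about -- a simplistic stand-in
    for a 'thing described by' service such as t-d-b.org."""
    return page_url + "#topic"

def to_full_rdf(simplified_triples):
    full = []
    for subj, pred, obj in simplified_triples:
        info = SCHEMA.get(pred)
        if info is None:
            full.append((subj, pred, obj))   # no schema info: pass through
            continue
        # The page is not the real subject; its primary topic is.
        # (An IndirectProperty/MetadataProperty flag would skip this step.)
        real_subj = primary_topic(subj)
        if info["object_property"]:
            # Undo the URI-in-a-string trick: the string names a page, too.
            obj = primary_topic(obj)
        full.append((real_subj, info["twin"], obj))
    return full

# Example: og:title becomes rdfs:label on the page's primary topic.
assert to_full_rdf(
    [("http://example.org/movies/rocky", "og:title", "Rocky")]
) == [("http://example.org/movies/rocky#topic", "rdfs:label", "Rocky")]
```

A real implementation would fetch the twin and range declarations from the schemas at the predicate URIs, and would need the metadata-property distinction sketched in the comments; the shape of the conversion is the same.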
So, that’s a rough sketch, and I need to wrap this up. If you’re at ISWC, I’ll be giving a 2-minute lightning talk about this at lunch later today. But if you’ve read this far, the talk won’t say anything you don’t already know.
FWIW, I believe this is implementable in RIF Core, which would mean data consumers which do RIF Core processing could get this functionality automatically. But since we don’t have any data consumer libraries which do that yet, it’s probably easiest to implement this with normal code for now.
I think this is a fairly urgent topic because of the adoption curve (and energy) on OGP, and because it might possibly inform the design of a standard JSON serialization for RDF, which I’m expecting W3C to work on very soon.