Hong Kong Erupts in Mass Protests Against Extradition Bill Amendment

I support the Hong Kong citizens' protest march, just as Hong Kong supported our protests 30 years ago.

On Sunday, huge crowds of demonstrators marched for hours through the streets of Hong Kong to protest a proposed law that would allow Beijing to extradite criminal suspects to mainland China for trial. It was the largest march challenging China's influence over Hong Kong since Britain handed sovereignty over the city back to China in 1997.

Organizers estimated that more than one million people took to the streets, nearly one-seventh of Hong Kong's population, to demand that political leaders in Hong Kong and Beijing shelve the law. Police estimated the crowd at Sunday's peak at 240,000.

A procession of young families, students, professionals, and the elderly snaked through Hong Kong's streets, reflecting unprecedentedly broad opposition to Beijing's latest moves to tighten its grip on the city. Critics say the proposed law could be abused as a tool against political dissidents, and would expose Hong Kong citizens to the mainland's comparatively opaque legal system, where detainees can be unjustly imprisoned and their rights violated.

Source: Hong Kong Erupts in Mass Protests Against Extradition Bill Amendment – The Wall Street Journal

Outlook Dims Again for China's Economy and the Yuan

And yet today the state-backed "national team" pushed A-shares up so hard that short sellers were left doubting everything…

On Monday, China released its trade data for May, giving markets their first look at how the country's exports fared after the White House's latest tariff increase on Chinese goods took effect on May 10. Exports rose 1.1% year over year in May, better than April's 2.7% decline, though part of the improvement likely came from exporters front-loading shipments in early May ahead of the tariff increase. More worrying, imports fell 8.5% year over year in May, the steepest drop since mid-2016 and far worse than April's 4.0% gain. Other signs had already pointed to weaker-than-expected domestic demand in China, such as the sharp drop in the new-orders subindex of May's manufacturing purchasing managers index (PMI) and steadily falling steel prices.

Source: Outlook Dims Again for China's Economy and the Yuan – The Wall Street Journal

If it’s interpretable it’s pretty much useless.

When doing machine learning, be clear about what your goal is: obtaining a model, or obtaining predictive power.
The former is the statistician's goal; the latter, the data scientist's.

If your model doesn’t have the same performance on the training set and in the live environment, that is not a matter of trust, but a problem in either your dataset or your testing framework. Trust is built on performance, and performance on metrics: design the ones that work for your problem and stick to them. If you’re looking for trust in interpretability, you’re just asking the model questions you already know the answers to, and you want them provided in exactly the way you expect. Do you need machine learning to build such a system? The need for ML arises when you know the questions and the answers but don’t know an easy way to get from one to the other. We need a technique to fake the process, and it may be that an easy explanation for it doesn’t even exist.
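The point about metrics can be made concrete with a minimal sketch (the metric choice, function names, and numbers are all illustrative, not from the essay): pick one metric that fits your problem, measure it both offline and on live traffic, and treat a divergence as a data or evaluation problem rather than a reason to demand interpretability.

```python
# Minimal sketch, assuming a simple classification task where plain
# accuracy is the agreed-upon metric. All values are made up.

def accuracy(preds, labels):
    """Fraction of predictions that match the labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def check_parity(offline_acc, live_acc, tolerance=0.05):
    """True if live performance is within tolerance of offline performance.
    A False result points at the dataset or the testing framework,
    not at the model's lack of interpretability."""
    return abs(offline_acc - live_acc) <= tolerance

offline = accuracy([1, 0, 1, 1], [1, 0, 0, 1])  # 0.75 on a held-out set
live = accuracy([1, 1, 0, 0], [1, 0, 0, 1])     # 0.5 on shadow traffic
print(check_parity(offline, live))  # False: investigate data drift or eval bugs
```

The design choice here is deliberate: one metric, fixed in advance and tracked in both settings, is what "stick to them" means in practice.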

(excerpt omitted)

I’m not a fanboy, and the more I learn about machine learning while trying to build real products out of it, the more I lose interest in this kind of discussion. Probably the only useful thing about ML is its ability to replicate processes that aren’t easy to describe explicitly: you just need questions and answers, and the learning algorithms will do the rest. Asking for interpretability as a condition for real-world usage undermines the foundations of the whole field. If the trained model performs well and isn’t interpretable, we are probably on the right track; if it is interpretable (and the explanation is understandable and replicable), why lose weeks and GPU power? Just write some if-else clauses.
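The closing argument can be illustrated with a small sketch (the task, feature names, and thresholds are hypothetical, not from the essay): if an interpretability analysis of a trained model yields a complete, replicable explanation, that explanation is itself a program, so the model can simply be replaced by explicit rules.

```python
# Hypothetical scenario: suppose analysis of a trained approval model shows
# its decisions reduce entirely to two thresholds. The "explanation" is then
# a full specification, and the model can be rewritten as if-else clauses.

def approve(income: float, debt_ratio: float) -> bool:
    """Explicit rule-based re-implementation of a (hypothetical) fully
    interpretable model: no training, no GPU, identical decisions."""
    if income < 30_000:    # assumed threshold recovered from the analysis
        return False
    if debt_ratio > 0.4:   # assumed threshold recovered from the analysis
        return False
    return True

print(approve(50_000, 0.2))  # True
print(approve(20_000, 0.1))  # False
```

This is the author's dichotomy in code form: where such a rewrite is possible, the ML machinery was never needed; where it isn't, the model earns its keep by doing something no easy explanation captures.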

Source: If it’s interpretable it’s pretty much useless. – Massimo Belloni – Medium