World of Warcraft NPCScan (Ten World of Warcraft Bugs That Genuinely Benefited Players!)
World of Warcraft NPCScan article list:
- 1. Ten World of Warcraft bugs that genuinely benefited players!
- 2. PokeMMO 11.25 changelog (quick repost)
- 3. 爱可可 AI paper recommendations, November 12
- 4. Machine Learning Is Making Video Game Characters Smarter
- 5. Hideo Kojima on game design, hinting that Death Stranding is being carefully polished
Ten World of Warcraft Bugs That Genuinely Benefited Players!
Summary: World of Warcraft has been running for ten years now, since 2005, and in that time tens of thousands of bugs large and small have surfaced. Many of them were never punished by Blizzard, so today we count down the bugs whose gains players got to keep.
No. 10: Karazhan infinite Midnight farming bug
Attumen the Huntsman, the first boss of Karazhan, is nothing special as a fight, but he has a chance to drop the Fiery Warhorse's Reins. From the start of TBC until this bug appeared I saw it drop exactly once; the rate is pitifully low.
For a stretch of the Mists of Pandaria era, though, a bug allowed infinite Midnight farming: after Attumen mounted up, Midnight still existed as a separate unit, so the pair could be killed again and again until Midnight corpses littered the screen. Blizzard fixed the bug quickly, but the mounts already obtained were never reclaimed, a genuine win for players.
Those who got the mount also earned the achievement:
No. 9: Halaa infinite token bug and the Halaa war talbuk mounts
In TBC, Halaa in Outland's Nagrand was a constant Alliance-versus-Horde battleground. Killing a player of the opposing faction there awarded one Halaa Battle Token. The war talbuk cost 100 Halaa Battle Tokens plus 20 Halaa Research Tokens, and the riding talbuk cost 75 Battle Tokens plus 20 Research Tokens; Research Tokens were bought with Oshu'gun Crystal Powder, which dropped from outdoor mobs.
Players of the same faction exploited a warlock Hellfire self-kill bug to earn Halaa Battle Tokens without limit; a single party could put together 100 tokens in a few minutes and buy the war talbuk. Later, Holy priests with the Spirit of Redemption talent could farm the same way, dying by flying up on a mount and dropping to their death.
Neither mount comes with an achievement, but they pad the mount count nicely. Blizzard eventually fixed the bug, and again the mounts were not reclaimed.
Reins of the Dark Riding Talbuk:
Reins of the Dark War Talbuk:
No. 8: Uldum 'Scourer of the Eternal Sands' phasing camel bug
In Uldum you must find the Mysterious Camel Figurine, a green-named NPC that cannot be targeted with a macro. The figurine is tiny, roughly the size of two ability icons, so even after an NPCScan alert you still have to look carefully.
Note that the figurine comes in two versions, with NPCScan IDs 50409 and 50410. Clicking 50409 triggers [Sandstorm], which lasts 6 seconds and teleports you to Feralas; 50410 merely pays out about 25 gold (though 50410 reportedly also has a small chance to teleport).
After the teleport to Feralas you find a herd of tamed camels and an elite named Dormus the Camel-Hoarder with 271,215 health, and you gain Dormus's Rage, a 20-minute debuff. Defeat him before the debuff expires and the Reins of the Grey Riding Camel are a guaranteed (100%) drop.
For a long time players exploited phasing to farm the camel mount without limit: invite whoever wanted the mount into the party and fly them to the spot in Feralas, and the group leader could kill Dormus over and over, trading camel after camel to the rest of the party.
No. 7: Hallow's End infinite Headless Horseman bug
Also in the TBC era, players could use an alt to bypass the daily lockout and summon the Headless Horseman endlessly until the mount dropped.
First, find an alt that has never done the Headless Horseman quest, meaning it can still start two summons, and bring it into the instance. Complete the summoning quest picked up outside (the yellow exclamation mark) to receive a candle, then destroy the candle. Have a mage open a portal to Shattrath, and have the alt pick up the instance's own daily kill quest (the blue exclamation mark) for a second candle. Now abandon that instance daily and check your bags: the candle is still there. Return to the city at once, have the warlock summon you back in, re-accept the Horseman quest, and run over to start the boss. The key step: have the abandon-quest confirmation dialog already open, and click abandon at the same moment you start the boss; you need a quick mouse hand. The Horseman begins his speech, yet the exclamation mark over the pumpkin turns blue again, and the whole loop can be repeated indefinitely. Blizzard later fixed the bug but never reclaimed the Horseman's Reins or the achievements earned in the meantime.
No. 6: Brewfest infinite mount bug
In TBC the Brewfest boss sat in the Blackrock Depths instance, and the summoning daily could be done once per player per day, so a five-man group got five attempts. Then someone discovered a bug that allowed unlimited summons; with enough time invested, the mounts were guaranteed.
How it worked: after completing 25 daily quests, you could still turn in the Brewfest quest and summon the boss, and leaving and re-entering the instance let you pick the quest up again. So with the 25 dailies already done, you could keep re-entering, re-accepting, and re-summoning.
Players farmed the Brewfest boss this way until both mounts dropped. The bug survived for two or three days before the fix, and Blizzard never reclaimed the mounts or achievements obtained through it.
Great Brewfest Kodo
Swift Brewfest Ram
No. 5: Thunderfury bindings bug
Thunderfury, Blessed Blade of the Windseeker, out of Molten Core, is the definitive main-tank weapon: striking looks, astonishing stats, and with it threat was never a problem again. Getting it takes many steps, and the hardest is defeating Garr (boss 4) and Baron Geddon (boss 5), each of whom has a chance to drop one half of the Bindings of the Windseeker; collect both halves, finish a long quest chain, and the blade is yours. Back in the level-60 days there was a bug that, once you owned one binding, let you force the other half to drop with the steps below. I tested it myself in that version, and it worked; in TBC the same method produced another blade. I only ever tried it twice, and it worked both times.
The steps:
1. You own one half of the bindings.
2. You have the fire essence quest item from boss 10, with the quest active.
3. You must be the first to enter the instance.
4. Empty your first bag (the backpack).
5. Put the binding half in the backpack's first slot and the essence in the third slot.
6. From the first boss onward, nobody may loot any boss.
7. Nobody may die on any boss fight.
8. You pull every boss yourself.
9. The other binding half drops.
No. 4: Ragnaros Pureblood Fire Hawk bug
In patch 4.2's Firelands, the final boss Ragnaros dropped the Pureblood Fire Hawk mount 100% of the time on heroic, while normal mode had only a sliver of a chance. Heroic Ragnaros was hard, so even as patch 5.0 approached, owners were few. Yet on the eve of 5.0, killing heroic Ragnaros somehow still awarded the Fire Hawk 100% of the time, and the bug ran for two weeks before Blizzard fixed it.
After the fix the mounts were not reclaimed, a genuine gift to the player base. The mount grants no Feat of Strength, but the Pureblood Fire Hawk looks so good that crowds farmed it anyway.
No. 3: old Zul'Aman four-chest bear mount bug
In TBC, Zul'Aman was a 10-player raid, tougher than Karazhan. The real draw, though, was the timed chest run: kill the bosses within the time limit and collect bonus rewards. Defeat the fourth boss, the lynx lord Halazzi, inside the limit and you earned an epic mount, the Amani War Bear, a darker-furred cousin of the level-85 Amani Battle Bear.
Because the timed run was demanding, many players turned to an exploit: they wall-jumped their character models into the instance ahead of time to clear the trash, then died and came back in through the entrance to go straight to the fourth boss and the mount. Every race could squeeze through except Tauren and Draenei, whose models were too large. Blizzard patched one route, but it was no match for player ingenuity, and new wall-jumps appeared.
The Amani War Bear is long since unobtainable and remains an obsession for veteran players. In its day this famously cute bear marked you as a top-end player; owning one felt like graduating from mount collecting, with nothing left to chase.
No. 2: Sha of Anger guaranteed Heavenly Onyx Cloud Serpent bug
In Mists of Pandaria the world boss Sha of Anger drops the Heavenly Onyx Cloud Serpent at about a 1-in-2,000 rate. The mount is gorgeous, but the drop rate is so low that owners were vanishingly rare. Then for a while there was a bug, and it was fixed before many players even realized it existed.
The bug required every member of the group to be at Exalted 999/999 with both the Order of the Cloud Serpent and The August Celestials, no exceptions, and the fight had to go uninterrupted by anyone else, alts and opposite-faction players included.
Many players will remember the four-person group from a while back in which every member received a Heavenly Onyx Cloud Serpent. Think it through: if not for this, how could four people possibly win it at once?
No. 1: the Scarab Lord bug, for the only legendary mount in WoW history
In the level-60 era, once the Ahn'Qiraj gate event began, only one player per realm could claim the game's only legendary mount, the Black Qiraji Battle Tank. The gate-opening required an enormous quest chain plus serious connections; in those days only players in realm-first-kill guilds had a chance, usually the guild's main tank, guild master, or raid leader.
But during TBC, for close to half a year after Blizzard opened realm transfers, players who had completed the prerequisite chain on their home realm could move to a newly opened realm and ring the gong there, claiming the Black Qiraji Battle Tank, the famed 'black bug' mount.
Players transferred to fresh realms by the hundreds just for this mount, until Blizzard made Ahn'Qiraj open automatically on new realms and closed the loophole.
Players who obtained the Black Qiraji Battle Tank also received two Feats of Strength and the title Scarab Lord:
Ranking this one first should need no argument; even players who got the black bug through this exploit are a rare sight today. Blizzard never acted against them, and neither achievements nor mounts were reclaimed. A windfall like this will never come again.
PokeMMO 11.25 Changelog (Quick Repost)
Features
Updated movesets to Gen 8-3
This change encompasses several thousand individual changes and cannot be summarized in the changelog. Please consult the in-game Dex for move lists
Added a client setting to show damage percentages dealt by moves in battle
By default, this option is only enabled for PvP matches. It may be changed in the Settings menu
Players can now click the ball in the Summary window to change a party member's ball type
Particle searches in the GTL/PC Advanced Search menus are now able to select from individual particles
Particles can now be previewed from the Inventory if you own the associated item
Normal duels now include options for: Level Scaling, Team Preview, and Turn Timers under the Advanced Options of duel creation
Level scaling in normal Duels now applies both upwards and downwards to the target level
Team Leaders may now customize the rank names of their Teams
Balancing
Draco Meteor has been removed from Hydreigon
Nasty Plot has been added to Hydreigon
Antidote, Burn Heal, Ice Heal, Awakening, and Parlyz Heal have had their 20HP healing effects removed
Antidote cost has been reduced from 600 to 200
Burn Heal, Ice Heal, Awakening, and Parlyz Heal costs have been reduced from 600 to 300
Changes
Desktop: Item hotkey bars can now always be moved around
General Bug Fixes
Fixed a client crash related to inventory screens
Fixed an issue where some NPCs would not render directionals (East/West) properly
Fixed an issue where, if a spectator of a matchmaking game disconnected, they would not be able to reopen the matchmaking window after reconnecting
Fixed an issue where, if you blocked the current owner of an Overworld Legendary, you wouldn't be able to fight them
Fixed an issue where Sinnoh's early Elite 4 rematches would use Hoenn's E4 teams
Battle Bug Fixes
Fixed a battle crash related to Wonder Guard
This was previously hotfixed by making Wonder Guard fail in conditions which would lead to a crash. This change re-enables Wonder Guard for all usage
Fixed Sky Drop cancellation for Gravity
// This change re-enables Sky Drop for use
Fixed an issue where, in Multi-Battles, if all other targets of a move fainted, the last target would receive damage as-if it were the only target
Multi-Battles modify damage by 0.75x when attacking multiple targets, meaning the last target was receiving an extra 25% damage
Fixed an issue where, in Multi-Battles, when casting a move using a Gem, the Gem would not apply its power boost against all targets of the move
Fixed an issue where negative stat stage value modifiers would not round correctly
For example, previously a non-Boss monster with 200 Speed affected by a -1 Speed stat stage would calculate its stat as (200 * 0.67) = 134 Speed. This change results in (200 * 0.6667) = 133.34, truncated to 133 Speed
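A minimal sketch of the corrected rounding, assuming the standard stage multiplier max(2, 2+stage) / max(2, 2-stage), which matches the -1 example above (only the -1 case is confirmed by this changelog):

```python
# Hypothetical helper illustrating the fix described above.
def staged_stat(base: int, stage: int) -> int:
    numerator = max(2, 2 + stage)     # -1 stage -> 2/3, +1 stage -> 3/2, ...
    denominator = max(2, 2 - stage)
    return int(base * numerator / denominator)  # truncate, don't round

assert staged_stat(200, -1) == 133  # previously int(200 * 0.67) = 134
```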
When replacing fainted targets in Multi-Battles, ability activation is now queued until all have been replaced
Trick Room now affects the Speed stat, instead of action ordering
This was primarily an issue when handling actions which would have been sorted by Speed, such as the above change with ability activation queueing
Role Play will no longer trigger ability swap-in effects (e.g. Intimidate) if the move failed
Fixed an issue where Magic Coat would allow a Sleep Clause violation in some scenarios
Players are now forbidden from casting multiple non-Rest Sleep Effects (e.g. Spore) against their own team. This primarily affects Triple Battles, but manifested in Doubles with Magic Coat and an extreme edge case with Sleep Talk
Rocky Helmet / Sticky Barb will no longer proc against moves which had just broken a Substitute (but will still activate on subsequent hits of multi-hit moves)
Substitutes are now rendered in front of battle sprites, instead of replacing them
Fixed an issue where, during Multi-Battles, Nature Power would pick a default target after cast instead of the requested target
Updated Nature Power's move map to Gen 6. The move map for Nature Power is now as follows (a short sketch follows the list):
Caves -> Power Gem
Sand/Dirt -> Earth Power
Grass -> Energy Ball
Swamp -> Mud Bomb
Water/Underwater -> Hydro Pump
Default -> Tri-Attack
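A minimal sketch of the list above as a lookup table; the terrain keys are illustrative, since the changelog does not specify internal identifiers:

```python
# Gen 6 Nature Power mapping, as listed above.
NATURE_POWER_MOVES = {
    "cave": "Power Gem",
    "sand": "Earth Power",   # sand/dirt
    "grass": "Energy Ball",
    "swamp": "Mud Bomb",
    "water": "Hydro Pump",   # water/underwater
}

def nature_power(terrain: str) -> str:
    return NATURE_POWER_MOVES.get(terrain, "Tri-Attack")  # default
```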
Copycat/Assist now share the same blacklist of moves which may be called.
Also, these moves may no longer call Fling
Me First now maintains a less restrictive blacklist of moves. This previously referenced Copycat's blacklist
Wonder Room / Power Trick now respect base stats modified by Power Split/Guard Split
Fixed Wonder Room / Power Trick application ordering
Previously, Wonder Room / Power Trick would always apply in the order Wonder Room -> Power Trick. This ordering is now dynamic and leads to different results if using Wonder Room -> Power Trick vs Power Trick -> Wonder Room
Stat traversal now behaves as follows (see the sketch after this list):
// Start->Power Trick->Wonder Room
Attack->Defense->SPDef
Def->Attack->Attack
SPDef->SPDef->Def
// Start->Wonder Room->Power Trick
Attack->Attack->Def
Def->SPDef->SPDef
SPDef->Def->Attack
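A minimal sketch reproducing both traversals above, assuming Power Trick swaps Attack/Defense and Wonder Room swaps Defense/Sp. Def (their standard mainline behavior; the function and key names are illustrative):

```python
# Track where each starting stat ends up under the two application orders.
def apply(stats: dict, order: list) -> dict:
    s = dict(stats)
    for effect in order:
        if effect == "power_trick":      # swaps Attack and Defense
            s["atk"], s["def"] = s["def"], s["atk"]
        elif effect == "wonder_room":    # swaps Defense and Sp. Def
            s["def"], s["spdef"] = s["spdef"], s["def"]
    return s

start = {"atk": "Attack", "def": "Def", "spdef": "SPDef"}
print(apply(start, ["power_trick", "wonder_room"]))
# {'atk': 'Def', 'def': 'SPDef', 'spdef': 'Attack'}  -> first traversal above
print(apply(start, ["wonder_room", "power_trick"]))
# {'atk': 'SPDef', 'def': 'Attack', 'spdef': 'Def'}  -> second traversal above
```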
Poison Touch no longer broadcasts its presence if the target would have been immune to poison
Tinted Lens no longer broadcasts its presence when modifying damage
Fixed an issue where excessive amounts of "Awaiting other player(s) actions..." broadcasts would occur during multi-battles
Fixed an issue where untradeable Toxic/Flame Orbs would render as {STRING_XXXXXX} during broadcasts
Fixed an issue where Ether-type items would give an error message when used in battle despite being used successfully
Fixed an issue where, during PvP which affects underlying parties (e.g. Overworld Legendary fights), if a player had fainted party members, their battle status ball displays would not reflect the fainted status
Inner Focus's ability display will now only trigger against moves with a 100% flinch chance
爱可可 AI Paper Recommendations, November 12
LG - machine learning; CV - computer vision; CL - computation and language; AS - audio and speech; RO - robotics
(* marks papers worth particular attention)
1、[AS] *Wave-Tacotron: Spectrogram-free end-to-end text-to-speech synthesis
R J. Weiss, R Skerry-Ryan, E Battenberg, S Mariooryad, D P. Kingma
[Google Research]
Wave-Tacotron: spectrogram-free end-to-end text-to-speech synthesis. Proposes an end-to-end text-to-waveform model that incorporates a normalizing flow into the autoregressive Tacotron decoder loop. Conditioned on text, Wave-Tacotron generates high-quality speech waveforms directly with a single model and no separate vocoder; training requires no hand-designed spectrograms or complicated losses over other intermediate features, only maximum likelihood on the training data. The hybrid architecture combines the simplicity of attention-based TTS models with the parallel generation of normalizing flows to emit waveform samples directly. Experiments show speech quality approaching state-of-the-art neural TTS systems, with markedly faster generation.
We describe a sequence-to-sequence neural network which can directly generate speech waveforms from text inputs. The architecture extends the Tacotron model by incorporating a normalizing flow into the autoregressive decoder loop. Output waveforms are modeled as a sequence of non-overlapping fixed-length frames, each one containing hundreds of samples. The interdependencies of waveform samples within each frame are modeled using the normalizing flow, enabling parallel training and synthesis. Longer-term dependencies are handled autoregressively by conditioning each flow on preceding frames. This model can be optimized directly with maximum likelihood, without using intermediate, hand-designed features nor additional loss terms. Contemporary state-of-the-art text-to-speech (TTS) systems use a cascade of separately learned models: one (such as Tacotron) which generates intermediate features (such as spectrograms) from text, followed by a vocoder (such as WaveRNN) which generates waveform samples from the intermediate features. The proposed system, in contrast, does not use a fixed intermediate representation, and learns all parameters end-to-end. Experiments show that the proposed model generates speech with quality approaching a state-of-the-art neural TTS system, with significantly improved generation speed.
https://weibo.com/1402400261/Jtv0NdAWq
2、[CV] *An Attack on InstaHide: Is Private Learning Possible with Instance Encoding?
N Carlini, S Deng, S Garg, S Jha, S Mahloujifar, M Mahmoody, S Song, A Thakurta, F Tramer
[Google & Columbia University & UC Berkeley]
A reconstruction attack on InstaHide. InstaHide protects data privacy with an encoding mechanism that modifies inputs before they are learned from, to keep models from leaking private information about their training sets. This paper presents a reconstruction attack on InstaHide that uses the encoded images to recover visually recognizable versions of the originals. The attack is effective and efficient, empirically breaking InstaHide on CIFAR-10, CIFAR-100, and the recently released InstaHide Challenge. The paper further formalizes various notions of private learning through instance encoding and investigates whether those notions are achievable.
A learning algorithm is private if the produced model does not reveal (too much) about its training set. InstaHide [Huang, Song, Li, Arora, ICML'20] is a recent proposal that claims to preserve privacy by an encoding mechanism that modifies the inputs before being processed by the normal learner. We present a reconstruction attack on InstaHide that is able to use the encoded images to recover visually recognizable versions of the original images. Our attack is effective and efficient, and empirically breaks InstaHide on CIFAR-10, CIFAR-100, and the recently released InstaHide Challenge. We further formalize various privacy notions of learning through instance encoding and investigate the possibility of achieving these notions. We prove barriers against achieving (indistinguishability based notions of) privacy through any learning protocol that uses instance encoding.
https://weibo.com/1402400261/Jtv84dhBE
3、[LG] Deep Reinforcement Learning for Navigation in AAA Video Games
E Alonso, M Peter, D Goumard, J Romoff
[Ubisoft La Forge]
Deep reinforcement learning for navigation in AAA video games. Uses deep RL, with no navigation mesh (NavMesh), to learn to navigate large-scale 3D maps, solving free movement for non-player characters (NPCs). Tested on complex 3D environments in the Unity engine an order of magnitude larger than the maps typically used in the deep RL literature, one of them modeled directly on a Ubisoft AAA game. Experiments show the approach performs well, reaching at least a 90% success rate in every tested scenario.
In video games, non-player characters (NPCs) are used to enhance the players' experience in a variety of ways, e.g., as enemies, allies, or innocent bystanders. A crucial component of NPCs is navigation, which allows them to move from one point to another on the map. The most popular approach for NPC navigation in the video game industry is to use a navigation mesh (NavMesh), which is a graph representation of the map, with nodes and edges indicating traversable areas. Unfortunately, complex navigation abilities that extend the character's capacity for movement, e.g., grappling hooks, jetpacks, teleportation, or double-jumps, increases the complexity of the NavMesh, making it intractable in many practical scenarios. Game designers are thus constrained to only add abilities that can be handled by a NavMesh if they want to have NPC navigation. As an alternative, we propose to use Deep Reinforcement Learning (Deep RL) to learn how to navigate 3D maps using any navigation ability. We test our approach on complex 3D environments in the Unity game engine that are notably an order of magnitude larger than maps typically used in the Deep RL literature. One of these maps is directly modeled after a Ubisoft AAA game. We find that our approach performs surprisingly well, achieving at least 90% success rate on all tested scenarios. A video of our results is available at this https URL.
https://weibo.com/1402400261/JtvdtjCP2
4、[CL] Weakly- and Semi-supervised Evidence Extraction
D Pruthi, B Dhingra, G Neubig, Z C. Lipton
[CMU]
Weakly- and semi-supervised evidence extraction. Seeks to expose the reasons behind a prediction by pairing predictions with supporting evidence a human can use to verify correctness. Proposes methods that jointly model text classification and evidence sequence tagging, combining a small number of evidence annotations (strong semi-supervision) with abundant document-level labels (weak supervision) for evidence extraction. Experiments show that, in a classify-then-extract framework, conditioning evidence extraction on the predicted label beats the baselines, and that as few as a hundred evidence annotations bring substantial gains.
For many prediction tasks, stakeholders desire not only predictions but also supporting evidence that a human can use to verify its correctness. However, in practice, additional annotations marking supporting evidence may only be available for a minority of training examples (if available at all). In this paper, we propose new methods to combine few evidence annotations (strong semi-supervision) with abundant document-level labels (weak supervision) for the task of evidence extraction. Evaluating on two classification tasks that feature evidence annotations, we find that our methods outperform baselines adapted from the interpretability literature to our task. Our approach yields substantial gains with as few as hundred evidence annotations. Code and datasets to reproduce our work are available at this https URL.
https://weibo.com/1402400261/Jtvid1N5m
5、[LG] Generative Neurosymbolic Machines
J Jiang, S Ahn
[Rutgers University]
Generative Neurosymbolic Machines (GNM): a generative latent-variable model that combines the strengths of distributed and symbolic representations, offering interpretable, modular, compositional structured symbolic representations while also generating images according to the density of the observed data, a key capability for modeling the world. Experiments show the model significantly outperforms baselines both at generating crisp images and at capturing complex scene structure that follows the observed structure density.
Reconciling symbolic and distributed representations is a crucial challenge that can potentially resolve the limitations of current deep learning. Remarkable advances in this direction have been achieved recently via generative object-centric representation models. While learning a recognition model that infers object-centric symbolic representations like bounding boxes from raw images in an unsupervised way, no such model can provide another important ability of a generative model, i.e., generating (sampling) according to the structure of learned world density. In this paper, we propose Generative Neurosymbolic Machines, a generative model that combines the benefits of distributed and symbolic representations to support both structured representations of symbolic components and density-based generation. These two crucial properties are achieved by a two-layer latent hierarchy with the global distributed latent for flexible density modeling and the structured symbolic latent map. To increase the model flexibility in this hierarchical structure, we also propose the StructDRAW prior. In experiments, we show that the proposed model significantly outperforms the previous structured representation models as well as the state-of-the-art non-structured generative models in terms of both structure accuracy and image generation quality.
https://weibo.com/1402400261/JtvnowoQ7
A few other papers worth noting:
[LG] The power of quantum neural networks
Quantum neural networks
A Abbas, D Sutter, C Zoufal, A Lucchi, A Figalli, S Woerner
[IBM Research & ETH Zurich]
https://weibo.com/1402400261/JtvqUj9tN
[LG] Deep Learning is Singular, and That's Good
Singular learning theory and deep learning
D Murfet, S Wei, M Gong, H Li, J Gell-Redman, T Quella
[University of Melbourne]
https://weibo.com/1402400261/Jtvs5xKMX
[LG] Function Contrastive Learning of Transferable Representations
Function contrastive learning of transferable representations
M W Gondal, S Joshi, N Rahaman, S Bauer, M Wüthrich, B Schölkopf
[Max Planck Institute for Intelligent Systems]
https://weibo.com/1402400261/JtvuurgFj
[CL] When Do You Need Billions of Words of Pretraining Data?
How many words of pretraining data, at minimum, does a well-performing pretrained language model need?
Y Zhang, A Warstadt, H Li, S R. Bowman
[New York University]
https://weibo.com/1402400261/JtvwE3oMz
[CV] CompressAI: a PyTorch library and evaluation platform for end-to-end compression research
CompressAI: a PyTorch library and evaluation platform for end-to-end compression research
J Bégaint, F Racapé, S Feltman, A Pushparaja
[InterDigital AI Lab]
https://weibo.com/1402400261/JtvF2lCWc
[LG] Margins are Insufficient for Explaining Gradient Boosting
A Grønlund, L Kamma, K G Larsen
[Aarhus University]
https://weibo.com/1402400261/JtvGNFpFN
Machine Learning Is Making Video Game Characters Smarter
For years, video game developers have used artificial intelligence to animate those characters encountered by a player, but non-playable characters, or NPCs, have been based on sets of rules coded by humans. Using the AI technology du jour, machine learning, future NPCs will program and reprogram their own rules, based on the experiences they encounter in games, in the process getting smarter the longer they play.
So says Danny Lange, the VP of AI and machine learning at Unity Technologies, a major maker of game “engine” software that handles the underlying mechanics of titles like Firewatch and ChronoBlade. Today the company announced Unity Machine Learning Agents—open-source software linking its game engine to machine learning programs such as Google’s TensorFlow. It will allow non-playable characters, through trial and error, to develop better, more creative strategies than a human could program, says Lange, using a branch of machine learning called deep reinforcement learning.
Unity’s new AI-linking tool isn’t confined to virtual characters. The software can also speed up the development of real-life robots, like self-driving cars, says Lange, by training them relentlessly in sprawling, computer-generated—but lifelike—virtual landscapes.
Unity used machine learning to devise strategies by assessing scenes from multiple angles, a bird's-eye view (left) and a first-person perspective (right), in this unreleased tank battle game.
Unity didn’t invent these technologies, but it’s made them easier to use, says the company. Google’s DeepMind, for instance, has used deep reinforcement learning to teach AI agents to play 1980s video games like Breakout, and, in part, to master the notoriously challenging ancient Chinese game Go.
There are also many examples of training self-driving systems in game-like environments. MSC Software’s Virtual Test Drive application provides simulations for car training. Games like The Open Racing Car Simulator and Euro Truck Simulator 2 are also being used for virtual training of autonomous cars. And Nvidia’s new Isaac Lab uses rival Epic Games’ Unreal Engine to generate lifelike virtual environments for training the algorithms that control actual robots.
Lange promises that the new ML-Agents tools, now available in beta on GitHub, will eliminate days or even weeks of hacking together links between a game engine and AI software. “What we’re trying to do here is get to that point within an hour,” he says, making it easier for more people to experiment with developing a better game character or training a robot.
SMARTER GAMES
Unity showed an example of deep reinforcement learning’s potential earlier this year with a simplified knock-off of the Unity-based mobile game Crossy Road, itself a knock-off of 1980s hit Frogger.
A chicken has to cross an endlessly wide road, gaining a point every time it hits a gift box and losing a point every time it runs into a truck. With the mandate to maximize the score, the learning process begins.
At first, the chicken flits around like a drunken moth, going backwards and forwards and colliding into gifts and trucks with equal intensity. After a few hours of trial and error, coupled with machine learning to identify the best tactics, the bird sails through the game with godlike power.
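The trial-and-error loop described here is classic reward-driven reinforcement learning. Below is a minimal, hypothetical sketch of such a loop (tabular Q-learning, +1 for a gift box, -1 for a truck); none of these names come from Unity's actual ML-Agents API:

```python
import random
from collections import defaultdict

q = defaultdict(float)                      # value of each (state, action)
ACTIONS = ["forward", "back", "left", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1      # learning rate, discount, exploration

def step(state, action):
    """Stand-in for the game engine: returns (next_state, reward, done).
    Reward is +1 for reaching a gift box, -1 for hitting a truck."""
    raise NotImplementedError

def choose(state):
    if random.random() < EPSILON:                        # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])     # exploit best known

def run_episode(state):
    done = False
    while not done:
        action = choose(state)
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt
```

After enough episodes the agent's action choices shift from the "drunken moth" phase toward the score-maximizing behavior the article describes.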
More complex non-playable characters could be trained on subtler goals, says Lange, such as maximizing playtime for the humans in a first-person shooter game.
“It will probably develop some strategy where it’s going to show itself in surprising ways, and you’re going to chase it, but you won’t catch it, and it won’t kill you right away,” says Lange. “You open the door for more creative behavior, which you could not possibly even imagine; or it would be very, very labor intensive to implement in traditional code.”
Don’t expect such autodidact virtual opponents soon. Building NPCs with deep reinforcement learning is still a science experiment for academics and tech company research teams. But the process might speed up if Unity’s ML-Agents make it easier for its millions of registered developers, even those without big budgets, to experiment.
SMARTER ROBOTS
Video game engines like Unity and Unreal can now model real-world physics with extreme precision. From the interplay of light and landscape to the friction between a rubber tire and concrete road, games provide virtual environments that are accurate enough to train a real-world robot.
Using a process called procedural rendering, a game engine can synthesize, on the fly, essentially unlimited miles of photo-realistic road to traverse. Machine learning software analyzes the video feeds from games and learns how to accurately interpret what it sees.
“It’s very similar to when you have a vehicle driving around in San Francisco capturing that on video,” says Lange, who was head of machine learning at Uber before he left for Unity in December 2016. “But the Uber guys, what we would have to do is go home and hire contractors to label that video data.” People have to tag every tree, car, pedestrian, sidewalk, lane divider, etc., so the learning software knows what it’s seeing and develops techniques to recognize them. In virtual training, every object in a scene is already labeled because software like Unity or Unreal generated a photo-realistic version of it.
Autonomous cars are giant tech projects right now—straining even the resources of major carmakers and Silicon Valley companies. But as products like Unity make it easier for small-time game developers to get started, Unity’s ML-Agents might enable more small-time robot and robot-car developers, too.
Hideo Kojima on Game Design: Hints That Death Stranding Is Being Polished With Care
Hideo Kojima, whose Twitter feed usually runs to food photos, suddenly posted a string of tweets today laying out his views on game design. He never names Death Stranding, but the game is clearly being built along these lines. Here is how this industry master thinks about designing games.
The tweets in full:
"Game creation is different from film making. Let’s say we imagine a hallway the player is meant to walk down according to the game design. The hallway has meaning in the plot as well as the game design. Is the purpose to deliver the story, to practice the controls, to show the scenery, or to add rhythm to the game play? A variety of possibilities exist.
As the game development proceeds, the details need to be fleshed out. How about the lighting, the walls of the hallway, how long is it and how high is the ceiling?
Can doors be opened? Who else walks down the hallway? How does player feel at this moment in the game? There is a never ending stream of revisions based on the plot, gameplay, the map layout, as well as dealing with technical hurdles.
There are other various details to consider, like adding a crank turn to the hallway, is it possible to add NPCs, how to fix poor gameplay tempo, making the characters stand out, or even whether to show the ceiling in cutscenes.
Almost everyday revisions are made depending on the point in the game development process. An action game can never be completed by ordering from a blueprint and assembling parts off a factory line.
If decision making and supervision are delayed, production efficiency drops, and that leads to redoing work. In order to avoid this trap, one must make small daily adjustment on site while creating the game. When everything is outsourced, the parts that come back just don't fit together. That is why it's important to take charge of the little details every day.
The feeling of gameplay in a single hallway, the concept, the visuals, the controls, the story hints, the map, the sound, the directions, all those are important to the overall game. Scripts and gimmicks change everyday.
This is what it means to make games, a process completely different from the concept -> script -> game design -> preproduction -> shooting -> postproduction process of film."
Judging from Kojima's earlier hints on Twitter, Death Stranding is expected to launch within 2018 as a PS4 exclusive, with a cast of stars: Mads Mikkelsen (Hannibal), Guillermo del Toro (Pan's Labyrinth), and Norman Reedus (The Walking Dead).