Wikisource:Scriptorium/Archives/2016-02

Warning: Please do not post any new comments on this page.
This is a discussion archive first created in February 2016, although the comments contained were likely posted before and after this date.
See the current discussion or the archives index.

Announcements

Achievements in 2015

In 2015 we added 18,179 scan-backed pages to the mainspace. We also reduced the number of non-scan-backed pages by a net of 884. This puts us at 35.8% scan-backed.

In the Page: namespace, we validated 48,520 pages and proofread a further 71,145. The number of problematic pages also increased by a net of 6,172. This puts us at 47.8% of pages in either proofread or validated status.

In the Index: namespace, we moved 244 works to "done" status, which takes us to a total of 1,894 completed works. The Proofread of the Month process has seen 21 works taken through the complete process from uploading to validated across the year.

Well done, everyone. Beeswaxcandle (talk) 19:27, 1 January 2016 (UTC)

Proposals

Wikimedia Foundation seeking feedback about message notifications

In a recent email to the volunteers who act as communicators to communities, Quiddity, one of the developer team's community liaisons, asked interested parties to provide feedback about the personal message notifications.

The overall task is: Deciding how to sort the notification types (e.g. "new usertalkpage message", "your edit was reverted", "a page you created has been linked to", "thanks", etc) into 2 groups. The current sorting has some problems. There are 2 more logical alternatives which the team is trying to decide between, and wants your feedback on (your preferences, or concerns). …

If there are interested users who have an opinion, and would like to assist in the development of better notifications, then please read "Sorting schemes" and contribute to the discussion at the places linked from that page. — billinghurst sDrewth 11:40, 27 January 2016 (UTC)

Bot approval requests

Help

Repairs (and moves)

Other discussions

Checker

Hi. Is there any volunteer (I guess JavaScript-savvy) who could write a gadget that makes some basic checks when a proofread page is saved? I am scanning PSM and I am finding many trivial mistakes in both proofread and validated pages (e.g. punctuation like ",," or ";,", and so on). I think that with a minimal library of common mistakes, and a warning raised asking the user to recheck and accept when saving such pages, a lot of errors could be intercepted. Unluckily I do not have this kind of skill set ...— Mpaa (talk) 21:12, 16 December 2015 (UTC)

No particular skills here either, though couldn't we get a bot to run through the pages on a regular basis (per project) and record those that contain the characters of concern? The first time the bot is set up it would have to check from an origin date, though on subsequent runs it would just be checking pages in the project space that have changed since the previous run. This idea seems to fit into the space around the discussion of bot fixing of typos, and it would seem that we need to get some discussions happening in that field. — billinghurst sDrewth 01:49, 18 December 2015 (UTC)
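A rough sketch of what such a periodic scan might look like, written as a small Node.js script against the MediaWiki API (Node 18+ for its built-in fetch). The namespace id, the suspect-pattern list and the timestamp handling are illustrative assumptions, not a finished bot.

// Hypothetical periodic scan: list Page:-namespace pages edited since the last
// run, fetch their wikitext, and log any that still match "characters of concern".
const API = 'https://en.wikisource.org/w/api.php';
const SUSPECT = /(,,|;,|,;|:;|\bvear\b|\bnearlv\b)/;   // illustrative pattern list only

async function api( params ) {
	const url = API + '?' + new URLSearchParams( { format: 'json', formatversion: '2', ...params } );
	const res = await fetch( url );
	return res.json();
}

async function scanSince( lastRunTimestamp ) {
	// Changes from "now" back to the previous run (namespace 104 = Page: on enWS).
	const rc = await api( {
		action: 'query', list: 'recentchanges', rcnamespace: '104',
		rcend: lastRunTimestamp, rctype: 'edit|new', rcprop: 'title', rclimit: '500'
	} );
	const titles = [ ...new Set( rc.query.recentchanges.map( c => c.title ) ) ];

	for ( const title of titles ) {
		const rev = await api( {
			action: 'query', prop: 'revisions', rvprop: 'content', rvslots: 'main', titles: title
		} );
		const page = rev.query.pages[ 0 ];
		const text = page.revisions ? page.revisions[ 0 ].slots.main.content : '';
		if ( SUSPECT.test( text ) ) {
			console.log( 'needs a recheck:', title );   // record for a human to review
		}
	}
}

scanSince( '2016-01-01T00:00:00Z' );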
User:Charles Matthews does a manual find and replace; maybe a semi automated tool-set for ocr clean-up. the errors may be different for each scan. a big picture solution would be machine learning ocr. Slowking4RAN's revenge 04:00, 18 December 2015 (UTC)
A few comments. Some use AutoWikiBrowser. There are ideas around a spell-checker, and also around a "captcha" micro-contribution scenario. I do collect lists of typos and likely typos (two different animals). The check-on-save idea is new to me, worth running past an expert.
The general take is that the djvu files we use are not ideal for proofing automation of this type: their creation flattens out too much information about where words are on the page.
I was thinking about a surrogate for word position, namely tagging each sentence on the page with a percentage. Say 23% for "this sentence starts roughly 23% of the way down the scanned page" (tacit: judged by proportion of text). This would allow viewing the sentence against a close-up of the scan.
This sort of pre-tagging might work best for a "Proofread of the Month" approach: process one text and make it easy for people to make micro-contributions to validation. Charles Matthews (talk) 06:47, 18 December 2015 (UTC)
Check-on-save would have been a better name ... my point was exactly this, so that immediate corrective action is taken. This does not conflict with the bot billinghurst mentioned, which can also be done (it is what I am currently doing on PSM; Ineuw might have seen me following his footprints ...).— Mpaa (talk) 16:39, 18 December 2015 (UTC)
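As a sketch only of the check-on-save idea (not an existing gadget): a few lines in a user's common.js could intercept the classic edit form's submit, test the edit box against a small suspect-pattern list, and ask for confirmation. The pattern list and wording below are placeholders.

// Hypothetical check-on-save: warn before saving an edit whose wikitext still
// matches common OCR-error patterns, so the user can recheck or accept.
var SUSPECT_ON_SAVE = /(,,|;,|:[.,;]|\bvear\b|\bnearlv\b)/;

jQuery( function ( $ ) {
	$( '#editform' ).on( 'submit', function ( e ) {
		var text = $( '#wpTextbox1' ).val() || '';
		if ( SUSPECT_ON_SAVE.test( text ) &&
				!window.confirm( 'Possible OCR errors remain on this page. Save anyway?' ) ) {
			e.preventDefault();   // stay in the edit box for another look
		}
	} );
} );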
I have asked if TemplateScript has capacity to be used for an OCR text error checker in some form. — billinghurst sDrewth 10:48, 21 December 2015 (UTC)
Might I suggest this is (subtly) the wrong approach to take?

Obviously there are a lot of variants on the checker script out there, in the form of "reformat this page" scripts (not least Special:TemplateScript's "Clean up OCR" option). This question is really asking for a reminder when such a script has not been executed before saving the results of a given edit.

Would it be smarter to ask for a script to analyse the contents of <div id="content">(in hindsight an exceptionally bad choice of anchor point!) on any page viewed and highlight candidates for further improvement?

Even better yet if some mechanism is worked out whereby editors may submit suggestions for rules for observed "common typos" to be incorporated into the proposed scanning gadget? AuFCL (talk) 00:52, 23 December 2015 (UTC)

Building up a checking "dictionary" in a tool is my thought also. Over time it would become a valuable asset that could be used in further tools. There is plenty of scope for highlighting intelligently selected words or phrases in text, using colour. In our texts "vear" is almost certainly an OCR error for "year", for example, and this one y -> v switch can be mechanically applied, e.g. to give "nearlv". So no shortage at all of ways to generate plausible typo-correction. One tool on which to do such work would be a big advance. Charles Matthews (talk) 09:24, 24 December 2015 (UTC)
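A minimal sketch of the mechanical-substitution idea above: seed a checking dictionary by applying a small table of known OCR letter confusions (y→v and so on; the table here is illustrative only) to an ordinary word list.

// Generate plausible OCR misreadings of known-good words, e.g. "year" → "vear",
// "nearly" → "nearlv". The resulting list can feed a highlighter or a bot report.
var SWAPS = [ [ 'y', 'v' ], [ 'e', 'c' ], [ 'h', 'b' ], [ 'i', 'l' ] ];   // illustrative only

function likelyTypos( word ) {
	var out = new Set();
	SWAPS.forEach( function ( pair ) {
		for ( var i = 0; i < word.length; i++ ) {
			if ( word[ i ] === pair[ 0 ] ) {
				out.add( word.slice( 0, i ) + pair[ 1 ] + word.slice( i + 1 ) );
			}
		}
	} );
	out.delete( word );
	return Array.from( out );
}

console.log( likelyTypos( 'year' ) );     // [ 'vear', 'ycar' ]
console.log( likelyTypos( 'nearly' ) );   // includes 'nearlv'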
I make no claims that this script is infallible (quite the contrary: Ineuw can vouch for how disastrous an earlier attempt was!); however, as a starting point, installing this:
function HighlightTyposLike( pattern, emphasis ){
	var content = document.getElementById('wikiPreview'); //presume currently editing page (must not touch unsafe structures like wpEditToken!)

	if(!content){
		content = document.getElementById('mw-content-text'); //user not currently editing: assume safe to address entire display region
	}

	if( content ){
		var typo = content.innerHTML.match( pattern ); //String.match takes only the pattern; returns null when nothing matches

		for(var each = 0; typo && each < typo.length; each++) {
		    content.innerHTML = content.innerHTML.replace( typo[ each ], '<span style="' + emphasis + '">' + typo[ each ] + '</span>' );
		}
	}
}

jQuery( document ).ready( function () { //pass a callback; calling the function directly would run it before the document is ready
	HighlightTyposLike(/(\\\S+|;[.,:]+|,[.,;:]+|:[.,;]+|modem|nearlv|vear)/gm, 'outline:5px double red;');
} );
—somewhere into your special:mypage/common.js or equivalent will henceforth put an ugly red box around matched typos. I repeat this is but a starting proposal: further expertise will be required to flesh out useful typo-matching patterns; and/or properly gadgetise the thing (if of course the concept meets approval to progress further) and finally to make it more sensitive to things like CSS which it should not tinker with (the current code fools around with things it really should not touch and should be considered potentially mad, bad and somewhat dangerous to know.) AuFCL (talk) 10:23, 24 December 2015 (UTC)

┌────────────────┘

fwiw... the beginnings of a mechanism that might have displayed such "typo matches" with some further development were in place at one point, but since it never handled more than "manual" (template-driven) "typos", it was abandoned entirely.

As for the Checker idea itself - sounds like a solution for sloppy proofreading &/or "lazy" proofreaders to me but I'm not opposed to having such a feature available. -- George Orwell III (talk) 01:47, 29 December 2015 (UTC)

I was blissfully ignorant of MediaWiki:Corrections.js up until now. Looks like a lot of what I was trying to do has been done better before... and I share many of your misgivings. AuFCL (talk) 02:10, 29 December 2015 (UTC)
I wouldn't say pr-typos was "better" than anything being proposed here (or elsewhere for that matter). It's just another WS facet sold into acceptance & implementation promising things it never delivered upon by the time development stalled or was abandoned. I, for one, would much rather have fixed the 50 or so "questionable template usages" and kept the feature active, if only hidden from use; I was in the minority however.

I think the nuance being missed here is that those aren't typos but OCR errors that were made permanent upon page creation and missed in subsequent proofreading and/or validation. If we want to prevent OCR errors from becoming hosted content upon Page: creation, the source file's text-layer should be tidied prior to uploading (imo). If we want to fix OCR errors made permanent after creation, we should easily find a way to do this with the existing bits and pieces of regex search & replace functionality floating around MediaWiki, and standardize a means to run either 'most-frequent' matches drawn from the volumes & volumes of works we've transcribed to date, or a 'customized list' of matches per individual work/serial as needed.

And the primary reason this is unlikely to happen is that most everything pertinent to making it a reality is built on a premise and coding that are outdated, if not inflexible, in their current incarnations. Plus, when the opportunity to rectify just some of the ills that continue to plague us presents itself (like the Wishlist survey and Dev summit earlier this month), the majority of folks do not / did not seem willing to voice their opinions or provide their support when it could have helped improve our situation. -- George Orwell III (talk) 03:26, 29 December 2015 (UTC)

For what it's worth, I am using AuFCL's latest version of the script and am very happy with it. It's simple, no fuss, no complications, and quite sophisticated. The only additional feature I was about to ask him for is a regex search for text with letters mixed in with numbers, as in dates (lG89). I realize that there may be "false positives", which is also good, depending on how one looks at it, because it forces an editor to notice them.
Additionally, install multiple dictionaries like UK English and US English. In Firefox, there is a new extension, "WiktionaryMultiLanguageDictionary", which looks up highlighted words. Since Wiktionary must also use public domain sources, I estimate that about 90% of words no longer used these days exist there, and their editors are doing a very good job.
As for GO3's superior proofreading skills and comments about sloppy proofreading, I withhold my comments except to say that everyone makes mistakes, and eating Hungarian goulash made with sweet Hungarian paprika does not a good proofreader make. :-) — Ineuw talk 21:42, 19 January 2016 (UTC)

Wikimania 2016: call for proposals is open!

Dear users,
the call for proposals for Wikimania 2016 is open! All members of the Wikimedia projects, researchers and observers are invited to propose a critical issue to be included in the programme of the conference, which will be held in Esino Lario, Italy, from June 21 to 28.
Through this call we only accept what we call critical issues, i.e. proposals aiming to present problems, possible solutions and critical analysis of Wikimedia projects and activities in 18 minutes. These proposals do not need to target newbies, and they can assume that attendees already have background knowledge of a topic (community, tech, outreach, policies...).
To submit a presentation, please refer to the Submissions page on the Wikimania 2016 website. The deadline for submitting proposals is 7 January 2016, and selection will be through a blind peer-review process. Looking forward to your proposals. --Yiyi (talk) 10:21, 19 December 2015 (UTC)

The deadline for the call for proposals for Wikimania 2016 has been moved to 17 January 2016, so you have 10 days to submit your proposal(s). To submit a presentation, please refer to the Submissions page on the Wikimania 2016 website. --Yiyi (talk) 09:33, 7 January 2016 (UTC)

Copyright query

Work: https://archive.org/details/scottishparliame00terruoft
Author: Charles Sanford Terry (1864-1936)
Publication date: 1905

The scans appear to be of a Glasgow edition, and I doubt there was a US edition.

Can someone confirm it's OK for Commons? (If not, a local upload will suffice.) ShakespeareFan00 (talk) 22:36, 31 December 2015 (UTC)

If there was no US edition within a month, my understanding is that it will still be under copyright in the UK, and therefore unsuitable for Commons. The UK copyright law extends protection until 95 years after the author's death. Commons licensing templates play it safe, and require >100 years after death in this situation. However, it could be uploaded locally, and housed here until 2037, if need be, since it's in PD in the US. --EncycloPetey (talk) 22:41, 31 December 2015 (UTC)

1936+70 would be 2006, so it would, I think, be PD in the UK, and given the nature of the work, I am making a reasonable guess that it's a Scottish author: w:Charles Sanford Terry (historian). ShakespeareFan00 (talk) 23:03, 31 December 2015 (UTC)

Can we confirm with some "official" notice somewhere that UK is Death+70yrs? Somewhere along the way I got it into my head that UK was +95 yrs, and if it is in fact 70, then that will simplify some of the work I'm wanting to do. --EncycloPetey (talk) 01:18, 1 January 2016 (UTC)
https://www.gov.uk/copyright/how-long-copyright-lasts; can't get more official than that. ShakespeareFan00 (talk)
Thanks! --EncycloPetey (talk) 02:14, 1 January 2016 (UTC)

Mrs Beeton 1907

It's now on Commons in a single version, thanks to Yannf's efforts : https://commons.wikimedia.org/wiki/File:Mrs_Beeton's_Book_of_Household_Management.djvu

If someone would like to migrate it over, I think an older issue that was stalling this work (namely the Contents table confusion) can be resolved. ShakespeareFan00 (talk) 23:05, 31 December 2015 (UTC)

OK folks - Why is this Index not finding the IDENTICALLY named file?

[[1]]?

It's not identically named; the script at Commons decided to do something clever (sigh). ShakespeareFan00 (talk) 01:45, 1 January 2016 (UTC)
I can't speak as to the underlying cause, but will note that (A) We had the same problem until recently with volumes of the EB 1911, and (B) In the pagelist you really need quotation marks around text (non-numeric) values. Sometimes spaces or other characters (such as parentheses) can cause the software to balk. --EncycloPetey (talk)
Looks like you have it sorted out now. --EncycloPetey (talk) 01:52, 1 January 2016 (UTC)
Partially. It still needs someone to migrate the split pagelist and the work done so far on the multiple parts. Probably an AWB job :( ShakespeareFan00 (talk) 01:57, 1 January 2016 (UTC)
Please follow up re:
For the love of God, please make a request to bulk move the existing Pages: under the two above Index: pages before creating anything under the brand new and fully complete (~2266 pages) Index: ? -- George Orwell III (talk) 02:00, 1 January 2016 (UTC)
That WAS the plan. I've marked the Index: as {{in use}} until someone with the technical skill can do the required bulk moves.

namely

Next question: where to make a bulk move request so that it actually gets actioned :) ShakespeareFan00 (talk) 02:17, 1 January 2016 (UTC)

New year

For all Wikisorcerers a very happy year 2016 with flowers at windows. Like this:

Flowers at the window

--Zyephyrus (talk) 20:47, 1 January 2016 (UTC)

Happy New Year everybody! Jpez (talk) 05:28, 2 January 2016 (UTC)
Yes, hurrah indeed and I hope everyone's 2016s are great and full of accurate proofreading and whatnot. And flowers. :) — Sam Wilson ( TalkContribs ) … 10:56, 2 January 2016 (UTC)

I've been working on the 1774 edition of Hannah Glasse's Art of Cookery for some time now. It's based on a transcription I found on a website that has since disappeared. I managed to download it before it was taken down, however.

If anyone's interested in helping out, the remainder of the transcription can be found at User:Peter Isotalo/Glasse. It just needs some page separation and formatting of headings and the indexes. Other than that, it's quite complete with very few transcription errors.

Please note that "ſ" has consistently been transcribed as "s" for clarity.

Peter Isotalo (talk) 22:08, 3 January 2016 (UTC)

You could use Help:Match_and_split to upload the text to the corresponding pages in Page ns.— Mpaa (talk) 22:21, 3 January 2016 (UTC)
Happy to help if needed. — billinghurst sDrewth 22:28, 3 January 2016 (UTC)
Thanks to both of you. billinghurst, I kinda suck at automated tools and scripts, so please feel free to dig in.
Peter Isotalo (talk) 22:54, 3 January 2016 (UTC)
i see there is a recent scan at internet archive 1747 edition. [2]. Slowking4RAN's revenge 23:28, 4 January 2016 (UTC)
Good to know, but that scan hasn't been proofread. The text I found was actually cleaned up. It has the occasional error and typo, but requires very little work. It would be interesting to have both editions here, but the one at the Internet Archive is going to require a ton of work to get up to snuff.
First editions, especially of cookbooks, are really only interesting to collectors. They weren't printed in greater quantities than later editions and are generally not that representative of the work as a whole, because they lack subsequent additions, corrections and errata. When it comes to works with a huge number of editions, like The Art of Cookery, later versions are generally of greater benefit to both historians and the general reader.
Peter Isotalo (talk) 13:00, 5 January 2016 (UTC)

Busted scan

Seems that the scan of the 1774 edition left out two pages between 329 and 332 without anyone spotting the error. The transcription I found online has the text (see this diff), but it can't be verified with the existing scan. Is there any way to solve this?

Peter Isotalo (talk) 23:28, 12 January 2016 (UTC)

Sponsorship applications for Wikimania 2016 — close 9 January

For those who are contributors to English Wikisource and other Wikimedia wikis, I am wondering whether you have considered applying for a sponsorship to attend Wikimania 2016 to be held at Esino Lario, Italy, from 21–28 June 2016.

Also noting that the call for submissions of ideas for papers ends on 7 January. — billinghurst sDrewth 22:43, 3 January 2016 (UTC)

submission deadline extended until 17 January (some pushing for 31). [3]. Slowking4RAN's revenge 17:42, 4 January 2016 (UTC)
How many people here are thinking of going? I'm pretty keen. Never been to a Wikimania before; sounds like fun! Would be nice to meet some Wikisource people. :) — Sam Wilson ( TalkContribs ) … 02:55, 5 January 2016 (UTC)
I plan on being there. I have submitted two proposals - one on audiovisual content, and the other on the place of humor in Wikimedia projects. BD2412 T 03:53, 5 January 2016 (UTC)
@Samwilson: There are hopes for a specific Wikisource component, even if it is just a workshop prior to the conference, during the hackathon. There is information about wishes in the notes from Vienna. Further, Aubrey says that he will talk to the organisers about their expectations and desires for WS stuff. Noting that workshops, and exploratory presentations of "how to", etc., are outside of the call-for-papers submissions process.— billinghurst sDrewth 04:34, 5 January 2016 (UTC)
Cool. I'll be up for all WS stuff (if I make it to Italy). — Sam Wilson ( TalkContribs ) … 05:13, 5 January 2016 (UTC)

Template limit

Can someone take an expert look at the end of this page and advise about the anomaly? Seems to be a recent phenomenon, earlier it was not there. Hrishikes (talk) 15:56, 4 January 2016 (UTC)

You are using the TOC templates which are an infamous culprit for hitting the template limit :) ShakespeareFan00 (talk) 17:04, 4 January 2016 (UTC)
As I said, earlier it was not so. Hrishikes (talk) 17:08, 4 January 2016 (UTC)
that’s what a template limit failure looks like. our smaller pages tend to make it rare here (page by page); you tend to see it on unarchived talk pages. i think there is a lua solution,[4] but you may have to work around it without a few templates (i.e. replace template with old school code) until someone re-codes the template. Slowking4RAN's revenge 17:36, 4 January 2016 (UTC)
Sadly, becoming categorised in Category:Pages where template include size is exceeded, as has occurred here, covers a number of distinct cases. The relevant report is this one:
<!-- 
NewPP limit report
Parsed by mw1016
Cached time: 20160104062321
Cache expiry: 2592000
Dynamic content: false
CPU time usage: 2.006 seconds
Real time usage: 2.138 seconds
Preprocessor visited node count: 53830/1000000
Preprocessor generated node count: 0/1500000
Post‐expand include size: 2097152/2097152 bytes
Template argument size: 398195/2097152 bytes
Highest expansion depth: 14/40
Expensive parser function count: 0/500
Lua time usage: 0.033/10.000 seconds
Lua memory usage: 1,001 KB/50 MB
Number of Wikibase entities loaded: 1-->

<!-- 
Transclusion expansion time report (%,ms,calls,template)
100.00% 1775.732      1 - -total
 69.73% 1238.140    389 - Template:Dotted_TOC_page_listing
 19.85%  352.494    336 - Template:Dotted_TOC_page_listing/5
 12.90%  229.068      1 - Page:A_Sheaf_Gleaned_in_French_Fields.djvu/30
 11.50%  204.233      1 - Page:A_Sheaf_Gleaned_in_French_Fields.djvu/31
 10.90%  193.620      1 - Page:A_Sheaf_Gleaned_in_French_Fields.djvu/28
 10.36%  183.913      1 - Page:A_Sheaf_Gleaned_in_French_Fields.djvu/29
 10.21%  181.308      1 - Page:A_Sheaf_Gleaned_in_French_Fields.djvu/27
  7.20%  127.859    389 - Template:StripWhitespace
  5.25%   93.240      1 - Template:Header
-->
—which reveals that the attribute Post‐expand include size is the one which has been exceeded. Propaganda notwithstanding, converting templates to Lua has little to no effect upon this particular category. Technically this ought to be addressed through manipulation of mw:Manual:$wgMaxArticleSize, but that is impractical without developer assistance, leaving splitting the page into smaller chunks as the only viable option. AuFCL (talk) 22:20, 4 January 2016 (UTC)
in the meantime, would not some replacement of {{c| }} with <center> and {{right| }} with <div align=right> work? this instance is close to the line. Slowking4RAN's revenge 01:56, 5 January 2016 (UTC)
Aren't those tags deprecated? Can we avoid using outdated tags? —Beleg Tâl (talk) 04:16, 5 January 2016 (UTC)
The page was displaying properly sometime back. What happened in the interim I don't know. It is tedious changing previously proofread pages because of snags occurring afterwards. Hrishikes (talk) 04:37, 5 January 2016 (UTC)
Somewhere recent changes have presumably pushed the page past the limit. One more template has become the page's after-dinner mint (Monty Python analogy). The dotted TOC templates have complex templating per row to manage dots and whitespace, which chews through the limits. @AuFCL: is there the possibility of creating and applying some common global classes, in conjunction with the template, to account for the different situations that affect said styles? — billinghurst sDrewth 04:50, 5 January 2016 (UTC)
The ability to style a column would be advantageous here, rather than having to apply {{sc}} or similar to each cell, thus puffing up the expansion size. Might be worth adding this component to the bug in Phabricator about wiki-styling of columns. — billinghurst sDrewth
The answer as ever is not as simple as everyone would want it to be. Or rather the simple answer simply does not serve for the diversity of cases each individual would like catered for. And there lies the rub: {{dotted TOC page listing}} and its ilk is a victim both of its own success and its various failings. Because it does some things superbly well there is always the temptation to add "one more feature" until the result eventually collapses into template singularity. On the other hand what it actually does only works on HTML desktop devices; poorly in mobile mode and not at all after conversion to ePub. And none of the ground-rules for any of those devices has shown any sign of settling down as yet.

As for replacing the simpler templates with HTML equivalents by all means knock yourself out but the very nature of "Post‐expand include size" does not lend itself to improvement by this method (other classes of "template include size is exceeded" may be addressed this way but not in this particular instance.)

And column-related classes haven't really been a stunning success story in experiments to date for a range of reasons: just ask George Orwell III. Not to mention the current page structure amounts to lots and lots of short tables; not at all the long-column structure that would be required to even consider this method of attack.

So the choice once more boils down to stripping out all of the dotted TOC componentry; splitting the page; or possibly both. (And taking the risk that future development may undercut the situation entirely, and having to face the embarrassment of finding out the wrong choice has been committed to and it will all have to be reworked anyway. To quote Hrishikes above: "It is tedious changing previously proofread pages because of snags occurring afterwards.") AuFCL (talk) 05:28, 5 January 2016 (UTC)

Nobody recalls that I polled for exactly this a few months ago? <sigh>

Again, the "bulk" styling of table-cells in the same table column can be done with some CSS3 definition acrobatics, a bit of community input to identify the most frequently encountered styling scenarios, and some dedicated vetting upon implementation to ensure we're not breaking anything in the process. -- George Orwell III (talk) 05:18, 5 January 2016 (UTC)

Another solution is to try substituting some templates by inserting {{subst:TEMPLATE}}. —Justin (koavf)TCM 05:22, 5 January 2016 (UTC)

Not associated

┌──────┘
…and the problem goes away. Now if Beleg Tâl would please walk us through whatever actual failing motivated this change perhaps the negotiations can begin afresh (or until the next time this happens? As of course it will.)

For the record reverting that single change dropped Post‐expand include size: from 2097152/2097152 bytes to 1717204/2097152 bytes in this particular instance. AuFCL (talk) 06:21, 5 January 2016 (UTC)

"We've" always suspected the problem with this template-d approach resided with the manner of symbol regeneration/retention being used, the desired spacing between said symbols and any needed 'stripping of whitespace' from the first and/or last symbol -- specifically, this line in the template:
  • {{{dottext|{{Dotted TOC page listing/{{StripWhitespace|{{{4|{{{spaces|1}}}}}}}}|{{{symbol|.}}}}}{{{dotend|}}}}}}
so no surprises there.

There has to be a way to cut down on the number of "calls" made by the two offending templates. We probed Lua for a better solution with little fanfare, as well as with little success, so what about trying to find a straight up javascript (jQuery) solution?

Where's Pathoschild and the TemplateScript thing of his? Could it provide a better way to achieve the same "dot leader" effect? What about defining all 10 spacing templates in .css and only loading that .css when a page needs it?

New Year; Old Questions -- George Orwell III (talk) 07:35, 5 January 2016 (UTC)
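For what a "straight up javascript (jQuery)" dot leader might look like, here is a minimal sketch; the row/column class names, markup convention and per-dot width guess are all assumptions, and it is not the jsleader.js mentioned below.

// Hypothetical client-side dot-leader: for each TOC row, measure the gap between
// the title and the page number and fill it with just enough dots, instead of
// generating the dots through nested template calls.
jQuery( function ( $ ) {
	$( '.js-toc-row' ).each( function () {
		var $row = $( this );
		var gap = $row.width()
			- $( '.toc-title', $row ).outerWidth( true )
			- $( '.toc-page', $row ).outerWidth( true );
		if ( gap <= 0 ) {
			return;   // nothing to fill on narrow (mobile) layouts
		}
		var dots = new Array( Math.max( 1, Math.floor( gap / 8 ) ) ).join( ' .' );   // ~8px per dot, a rough guess
		$( '<span class="toc-leader">' ).text( dots ).insertAfter( $( '.toc-title', $row ) );
	} );
} );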

Uh huh. Remember User:AuFCL/common.js/jsleader.js abandoned since...over twelve months... (actually close to two years!) AuFCL (talk) 07:42, 5 January 2016 (UTC)
Almost forgot about that one; I was thinking more of the variation on a theme that we tried, where rows and rows of plain dots were defined as the ::before (or was it ::after?) pseudo-element's content: property in a special .css.

Regardless - the point is that there must be a better way to accomplish this than the current approach, somewhere out there in developer land. Problem is, those who might be able to help us out there don't really come through here much. -- George Orwell III (talk) 07:58, 5 January 2016 (UTC)

Thanks for resolving the current issue by template edit reversal. Hrishikes (talk) 09:08, 5 January 2016 (UTC)
My reasoning was specified in the edit summary: "double number of dots for larger screens". Currently the dots do not reach all the way across the screen. I did not realize that Wikisource couldn't handle a fix. —Beleg Tâl (talk) 14:22, 5 January 2016 (UTC)
is the issue ripe enough for a phabricator ticket? i see that is the venue to engage the developers; this is a multi-project wide, code issue that needs attention. Slowking4RAN's revenge 17:01, 5 January 2016 (UTC)
Hello, original creator of the Dotted TOC templates here... I just wanted to note that I created these back in 2008 doing the dots with CSS border style properties. But somebody replaced that with the insane template calls and loops and stuff. So what George Orwell III is suggesting, I think, is the way they were originally intended to work.
The CSS standards documents back then actually defined fairly specific parameters for controlling the dot sizing and spacing, it just hadn't been implemented in any of the browsers yet. I'd originally been hoping to set the dot size relative to font size by using font size units, so with any luck that's actually possible now. ❨Ṩtruthious ℬandersnatch❩ 13:55, 9 January 2016 (UTC)
That's just about the way I remember events too. The development of CSS standards for dot leaders has tapered off since then and, the last I checked, they don't even apply to @screen anymore - just @paged (printing) settings, if anything at all. When "we" revisited defining the various formatting styles thru css, folks did not like the fact that it would limit the spacing and/or symbol options possible thru the current incarnation, so that avenue for replacement was dropped as well. -- George Orwell III (talk) 23:15, 9 January 2016 (UTC)
Ha! I didn't even know about that, perhaps it wasn't around back in 2008. I was actually doing it by setting the border-style property on an empty element, like so—it appears some of the templates I created were left with the CSS intact after all.
Unfortunately it seems that the W3C has abandoned offering much control of border-style dots in CSS, but that sort of stuff is present in SVG: this example works for me in Chrome and Firefox on Linux, though it's using fixed-length lines. All sorts of fancy stuff like gradient-strokes and animation appears to be possible too...
So anyways, possibly the template limit issues could be gotten rid of with well-crafted inline SVG, if SVG is allowed? (I haven't been around here since SVG really started working cross-browser.) Though I'd certainly agree with you that if there are actually CSS styles for dot leaders that's really the ideal way to do it. — ❨Ṩtruthious ℬandersnatch❩ 17:36, 10 January 2016 (UTC)
To add to the topic creep: might as well consolidate the wish-list for tags the parser currently forbids but somebody has (reasonably recently) expressed a wish for. On my observations this is currently running at:

<aside>, CSS (i.e. features beyond scope of style, e.g. ::after, ::before etc.), <col> & <colgroup> etc., <img>, <svg>.

To which I might as well throw in the anti-options of clarifying the rules of when and how to stop the parser mucking around with inserting/deleting/subverting <p> usage, short of indiscriminate <div> saturation; and proper recursive nested extension-tag (e.g. <poem>, <ref>) handling.

Have I missed anything else precious to anybody (if so please expand appropriately)? AuFCL (talk) 23:27, 10 January 2016 (UTC)

Long time since I've been here, and the News project seems practically deserted (last updated in July '15). Any plans to revive it? Regards, C. F. 13:42, 6 January 2016 (UTC)

i see User:AdamBMorgan lost interest, Wikisource:News/2014-04. if you took up the task, i doubt objection. also consider a more multilingual version for m:Wikisource Community User Group, such as the GLAM/Newsletter. Slowking4RAN's revenge 12:49, 7 January 2016 (UTC)
Um, I probably won't be very active till the end of March or so (exams), so I'd prefer it if someone else could take up the task now... or if it remains inactive till then, I have no problem helping out from that point. C. F. 15:27, 7 January 2016 (UTC)

A Gadget bug

When selecting the "Enable OCR button in Page: namespace" gadget, it eliminates the Proofreading sub-toolbar of the enhanced editor toolbar. As far as I know, there are no toolbar-related scripts in my common.js, or in typoscan.js, so I assume it's a bug of some sort. P.S.: When disabling the OCR gadget, the toolbar returns. — Ineuw talk 18:34, 10 January 2016 (UTC)

Behaviour not encountered here. @Ineuw: Has it stopped happening for you? AuFCL (talk) 23:29, 10 January 2016 (UTC)
Unchecked the OCR, and it went back to normal.— Ineuw talk 02:15, 11 January 2016 (UTC)
@AuFCL: Rechecked the OCR because seeing is believing. :-) File:OCR button disables Proofread toolbar.jpg
I’ve been seeing this toolbar intermittently disappearing, for about a month. sometimes will reappear with a refresh. less frequently recently. Slowking4RAN's revenge 03:06, 11 January 2016 (UTC)
I did not doubt Ineuw; in fact got so far as to prepare screenshots here before realising uploading same would be just cruel. Another preference might be affecting this overall: under Editing I have only "Enable enhanced editing toolbar" checked, with both "Show edit toolbar" and "Enable wizards for inserting…" unchecked. Long shot? AuFCL (talk) 04:17, 11 January 2016 (UTC)
This is my Edit setup:
Yes Edit pages on double click
Yes Show header and footer fields toggle the visibility of the noinclude header and footer sections when editing in the Page namespace
Yes Horizontal layout when editing in the Page: namespace (toggles between side-by-side and horizontal layouts)
Yes Prompt me when entering a blank edit summary
Yes Enable enhanced editing toolbar
Yes Show preview before edit box

The rest are blanked. — Ineuw talk 06:42, 11 January 2016 (UTC)

Just to clarify. I only mentioned this in case it happens to others, but I don't need the OCR. So don't waste time on this on my account. — Ineuw talk 08:54, 11 January 2016 (UTC)
I owe you an apology. OCR loaded but Proofread toolbar did not about 30 times in succession today (and then worked correctly exactly once just for kicks afterwards.) AuFCL (talk) 23:51, 11 January 2016 (UTC)
yeah, i’ve turned off ocr button. need a refresh of ocr, and cleanup of toolbar. Slowking4RAN's revenge 04:13, 14 January 2016 (UTC)


Mein Kampf

There is reasonable coverage of Adolf Hitler's minor works here, but not his books. I think it would be very useful to have the book online in a reputable/transparent public domain "edition", in both English and German. Indeed, it might be a good candidate for a parallel edition -- and an especially good candidate for any kind of parallel annotation or in-line discussion tool. I don't know if Wikisource has anything like this available "by default" but, for example, hypothes.is would be one way to bring in the annotation and discussion layer. It looks as though the Murphy translation into English will enter the public domain in Europe next year. I'm not sure what the most reputable source would be for a German version of the text, and I don't speak or read German anyway, so I'll leave that open ended. If there's another place to have conversations that pertain to multiple languages, please direct me, or feel free to put a link to this thread there. Arided (talk) 21:07, 11 January 2016 (UTC)


Hi there,

the German original version of the book has been in the public domain since the 1st of January 2016. However, I don't think that any version of this piece of hate propaganda should be hosted on any WMF wiki. The German Wikisource community decided this already half a year ago. The book is available on several pages on the WWW, so if you really want to find it (and use it - at least I hope so - only for educational purposes), it's no problem. Most readers wouldn't find the text via the Wikisource search box, but via Google or another search engine, and it wouldn't be a really big problem for Wikisource if they read this particular text somewhere else; it could even be some sort of bad PR for WS if this text were hosted here. (The Government of Bavaria, which is involved in this a lot as it is for some reason the ancillary executor of Adolf Hitler, has also announced that everyone publishing this text uncommented on the web or elsewhere would be reported to the police, as anyone publishing the work may (complicated legal situation) commit the German criminal offences of §86 StGB (distribution of propaganda material of anticonstitutional organizations/Verbreiten von Propagandamitteln verfassungswidriger Organisationen) and §130 StGB (incitement to hatred/Volksverhetzung); however, if you don't live in Germany, nothing will happen, and if you do live in Germany, probably nothing will happen either). Greetings from Germany -- Milad A380 (talk) 22:13, 11 January 2016 (UTC)

Aside from the social and political aspects of hosting Mein Kampf, I don't think the German text should be hosted on English Wikisource (whether or not it is hosted on German Wikisource is of course up to them). When one becomes available, I'd say that it is worth having an English translation here though. Does that sound disrespectful? I mean, of course I'm not advocating that WS host Nazi hate material! I don't see it like that. We are a library, and do not cast judgement on the material we host. As you say, Milad A380, the book is available all over the place anyway, so it's not like we're going to prevent anyone looking at it by not hosting it. What we gain by hosting it is just another small step towards a more complete library. — Sam Wilson ( TalkContribs ) … 23:55, 11 January 2016 (UTC)
the English and German sensibilities are different on this material. not at gutenberg; it is at internet archive [17]. Slowking4 02:52, 12 January 2016 (UTC)
Hi Milad A380, thanks for the thoughtful and informed reply. To be clear, my idea would be to have the text online primarily so that it could be discussed and (hopefully) rendered powerless. I'm actually Jewish and no fan of Hitler or neo-nazis. I'm not even a fan of UKIP. My opinion is that a rich public domain and vibrant public discourse are the best assurances against dictatorships, hate, bigotry, and so on. There is a technical question as well as a "content" question here, about how best to have a civilized discussion about reprehensible things. But of course public discussion shouldn't be reserved just for dubious texts! I'll start another thread to take up the technical questions. With regards, Arided (talk) 21:27, 13 January 2016 (UTC)


I would suggest that a decision by German Wikisource not to host Mein Kampf would violate the Foundation's Guiding Principles. However that's their business. Hesperian 06:43, 12 January 2016 (UTC)

Irrespective of the proprieties of hosting this, has anybody actually read the work in question? It is a difficult slog and probably won't be fun to proof, content notwithstanding. AuFCL (talk) 07:07, 12 January 2016 (UTC)


For this book in particular, I don't believe that a community translation is a good idea. New translations should be made by specialists.

We should not focus only on Hitler; there are a lot of important people of World War II who died in 1945. On the French Wikisource, I started proofreading The Doctrine of Fascism by Benito Mussolini. Pyb (talk) 14:58, 12 January 2016 (UTC)

Reprehensible or not, the book is clearly an important historical document, including the officially sanctioned, contemporary Murphy edition. Once it's PD, I don't see any reason why we should waste any time trying to block it.
Peter Isotalo (talk) 22:58, 12 January 2016 (UTC)

Let me note that Wikisource uses US copyright law, wherein Mein Kampf has probably always been public domain, and the URAA presumably did not resurrect it. Translations are most likely under copyright for 95 years from publication, which is still a while for works of the 1930s.--Prosfilaes (talk) 06:10, 13 January 2016 (UTC)

Technical question: "offset" annotation tools

If you're not familiar with the idea of "offset annotations", please browse to https://via.hypothes.is/https://en.wikisource.org/wiki/Alice%27s_Adventures_in_Wonderland_(1866)/Chapter_1 where you can see an example. Here, I've used the hypothes.is proxy (via.hypothes.is) to add an annotation to the version of Alice's Adventures in Wonderland hosted on Wikisource.

Hypothes.is might be the most technically interesting example of an annotation tool (notably, it's open source), but there are many other examples, see w:Web_annotation for a feature grid comparison. Hypothes.is is currently lacking site-wide integration of comments via RSS feeds, although it has page-specific aggregation and author-specific aggregation -- and the developers assured me that they are working on site-wide integrations. It's not so hard to combine several page-specific aggregators into a broader "funnel" to aggregate comments. Hypothes.is also aims to integrate a reputation model, which should be interesting and useful: https://hypothes.is/workshop/

My sense is that this sort of tool could be very useful for discussing classic and modern texts. Since there are many different tools, it may make sense for different "communities" to do their annotations and discussions in different ways. Is there a Wikisource "policy" (or "routemap") around annotation tools? Arided (talk) 21:45, 13 January 2016 (UTC)

See Help:Annotating. Note that there must be a completely unannotated version first. You should also be aware that Wikisource does not provide a platform for discussing a text; that concept belongs to Wikibooks. Our primary aim is to provide the text as it was published. An annotated copy here would provide explanatory notes (or wikilinks) to assist the modern reader with understanding the text in its original context. For example, if a text from 1732 refers to "the Prime Minister", an annotation linking to Robert Walpole would assist the reader. We would not expect interpretive annotations or matters of form-criticism (or indeed outlines for high-schoolers to use in their assignments). Beeswaxcandle (talk) 06:17, 14 January 2016 (UTC)

This is taking a long time, anyone want to assist? ShakespeareFan00 (talk) 20:37, 14 January 2016 (UTC)

That is why we have projects where people can join in based on their interests, and why a work will take time. It means that we don't have to keep prodding people who quite possibly have no interest. — billinghurst sDrewth 01:58, 18 January 2016 (UTC)

1923 copyright

Does anyone know what the date is that we are able to upload/transcribe 1923 works? Thanks, Londonjackbooks (talk) 17:05, 16 January 2016 (UTC)

The 95 year rule means that we can start in 2019 (for some works—year of death still plays a part). So, we've got 3 years left to get everything pre-1923 done!! Beeswaxcandle (talk) 17:54, 16 January 2016 (UTC)
That long, huh? What if an author died in 1993 with a book published in 1923? Am I going to have to wait longer than I thought? <cringe> ... Thanks! Londonjackbooks (talk) 18:00, 16 January 2016 (UTC)
yes, 2019 assuming the mickey lobby does not work their magic again. see also https://web.law.duke.edu/cspd/publicdomainday. Slowking4RAN's revenge 02:43, 17 January 2016 (UTC)

This currently links to a book first published 1947 according to WP. May I overwrite it, transcluding Index:The Hog.djvu? Cheers, Zoeannl (talk) 07:01, 18 January 2016 (UTC)

Also, similar issue with Sheep being redirected.
Should all such "common" names have a disambiguation page set up? And the books named with author identifiers e.g. [[The Hog (Youatt)]]? Zoeannl (talk) 07:30, 18 January 2016 (UTC)
These will both need to have disambiguation pages and the current works under those titles moved. Yes, we usually use author identifiers for the disambiguated titles. Beeswaxcandle (talk) 07:49, 18 January 2016 (UTC)
So I’ve renamed my titles (Youatt). There are 6: Sheep, Cattle, and The Dog, The Horse, The Pig, The Hog. Is there a "needs disambiguation" template like "missing x"? Zoeannl (talk) 09:02, 18 January 2016 (UTC)
Blech. It looks as though The Hog consists only of the first chapter, is unsourced, and in 8 years never had the rest of the work added. It may be a candidate for deletion, unless someone can find more of it. However, for a common title like this, it is still best practice to set up disambiguative titles using parentheses, as Beeswaxcandle has suggested, and to make the page at the generic title a disambiguation page. We have no template for "needs disambig"; just start one yourself following one of the existing models, such as The Birds. --EncycloPetey (talk) 19:00, 18 January 2016 (UTC)
Rather than a template for needs-disambig, I would much prefer that we list them here or at WS:AN (if they need an admin) and just get them done; the need is usually pressing. I am always happy to do them, and have tools to do them. If it is preferred that they are not listed here or there, then stick them on my user talk page (though that relies on my availability, rather than whomever is available). Wikisource:naming conventions and its talk page give guidance and examples of how to disambiguate. Noting that where we know that we are presumably going to disambiguate in the future, we can have the one page disambiguated per the convention and create a redirect from the parent term. We have numbers of such cases already. — billinghurst sDrewth 03:10, 19 January 2016 (UTC)
The work where The Hog was first published was renewed (R603042). I can imagine scenarios where the renewal was not valid and the URAA didn't restore it, but it looks like it's still in copyright.--Prosfilaes (talk) 23:48, 19 January 2016 (UTC)
To note that it has subsequently been nominated for deletion. — billinghurst sDrewth 03:16, 20 January 2016 (UTC)

Wikidata in Australia — some opportunities

Hi all. Andy Mabbitt pigsonthewing will be in Oz during February (8th-20th) doing a flying visit through Melbourne, Sydney, Canberra and Perth covering Wikidata and ORCID from a number of aspects. For those who have an interest there will be some opportunities to meet and learn from Andy, some somewhat limited. When WMAU has details available I will look to share, or Gnangarra will, and we can put up more formal announcements then. — billinghurst sDrewth 12:44, 18 January 2016 (UTC)

There's a few details at https://wikimedia.org.au/wiki/Wikidata_Tour_Down_Under — note that two of the events are general Wikimedia meetups, so everyone's welcome to those at least. I guess the conference ones are only for conference delicates. — Sam Wilson ( TalkContribs ) … 02:49, 19 January 2016 (UTC)
Nice Freudian slip. Perhaps you meant "delegates" rather than "delicates"? AuFCL (talk) 03:05, 19 January 2016 (UTC)
Oh! hehe, yes! Oops. I'm sure they're all terribly hardy types. Although, one doesn't need as thick a skin on wikisource as one does on wikipedia, I sometimes think... :) — Sam Wilson ( TalkContribs ) … 03:16, 19 January 2016 (UTC)
Andy Mabbitt? Who he? ;-) Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 16:55, 22 January 2016 (UTC)

My visit has been extended; after Australia, I'll be spending a few days in Jakarta, Indonesia, again giving talks on Wikidata to GLAM people and Wikimedians. Do join us if you can, and please invite your Wikimedia, OpenData, OpenKnowledge, GLAM or OpenStreetMap contacts in those countries to come along. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:44, 25 January 2016 (UTC)

Use of "A" and "The" in disambiguation pages

I've noticed that some disambiguation pages include titles with "A" and "The", and some have separate pages for such titles. For example: Appeal disambiguates works called Appeal, An Appeal, or The Appeal, but Song and A Song are separate (though interestingly The Song is included in Song).

I prefer the idea of having similar titles disambiguated on the same page, especially since articles are not consistently used and often redirected (especially for encyclopedias, see The Dictionary of National Biography, Encyclopedia Americana, and The Book of Common Prayer (ECUSA) as examples). For this reason I think it would be a good idea to merge similar disambig pages such as Song/A Song or Hymn/A Hymn. Are there any reasons why I should not do so? —Beleg Tâl (talk) 17:41, 18 January 2016 (UTC)

If there are not too many disambiguation entries with "A" and "The", then merging the disambiguation pages is fine. Otherwise it's better to keep them separate (like the case of Song and A Song). --Neo-Jay (talk) 17:58, 18 January 2016 (UTC)
On the whole, I agree with combining them as "best practice", but aside from Neo-Jay's point, there will be some situations that do not fall neatly into the preferred pattern.
Of course, there may be situations where the same work is known and published both with and without "The" in the title, such as Aristophanes' Birds, which is variously called Birds or The Birds depending upon the translator / publisher. In such situations it makes perfect sense to merge both versions of the titles, because there would be no simple means of separating versions of those titles with and without the definite article.
And if we combine titles, do we combine singular and plural forms in addition to those with/without the article? That is, if we combine Song, A Song, and The Song, should we also combine into that Songs, The Songs, etc.? In some languages they would, because of the relative insignificance of inflectional endings for title forms. It becomes a can of worms.
But there will also be situations where separate listings can be more appropriate. One such set of situations that springs to mind is where one version of the title predominantly takes a single meaning, as with the titles of books of the Bible; listing together titles such as Exodus and The Exodus, or Romans and The Romans, would lump together quite different subjects. And, as Neo-Jay points out, sometimes the list of items to disambiguate grows very long, as in cases where we have multiple encyclopedic articles with a given title, in addition to journal articles, books, poems, and other works. In these situations, it is worth considering a split of disambiguation pages. --EncycloPetey (talk) 18:52, 18 January 2016 (UTC)
Hm, okay. I would be fine with conglomerating all of them, but it looks like this would be mostly best left as is. Thanks. —Beleg Tâl (talk) 22:24, 18 January 2016 (UTC)
I think that we can have a general approach of an agglomerated page, and allow for exceptions where someone can propose why a variation should exist. In time we will build some guidance on what we have accepted as variations. Also part of that mix is what the preferred target page is, and my exposure to the works would generally say go without, as the "The" or "A" is often dropped from titles when they are referred to in other works. We should also have an awareness of what has been done at enWP and Wikidata as part of our conversation.

As such, it makes sense to me to present a case at WS:S for merging any existing pages, and to seek a consensus on what to do; and for future cases where we split a page, the argument for why to split can be made here by the same process. There is direction that we can add to Help:Disambiguation. We should also look to apply a similar approach to version/disambiguation/translation disambiguation. — billinghurst sDrewth 01:09, 19 January 2016 (UTC)

That sounds sensible. I would further suggest, as a general policy, that if Hog exists as a disambiguation page, then A Hog and The Hog should necessarily be either disambiguation pages or redirects to Hog. That is, neither A Hog nor The Hog should be a copy of a work when the main location serves to disambiguate. So, given that Hog is a disambiguation page, all works of that title should exist under titles that include some further differentiation, such as an author name, date, translator, or should be a subpage of a larger work, or something to set it apart from the main title form. --EncycloPetey (talk) 03:21, 19 January 2016 (UTC)
Sounds reasonable to me, from my experiences. I would like to hear others' opinions too, as broadest input is valuable. — billinghurst sDrewth 06:10, 19 January 2016 (UTC)
Yes, seems like a good way to reduce confusion. — Sam Wilson ( TalkContribs ) … 06:15, 19 January 2016 (UTC)


2016 WMF Strategy consultation

Hello, all.

The Wikimedia Foundation (WMF) has launched a consultation to help create and prioritize WMF strategy beginning July 2016 and for the 12 to 24 months thereafter. This consultation will be open, on Meta, from 18 January to 26 February, after which the Foundation will also use these ideas to help inform its Annual Plan. (More on our timeline can be found on that Meta page.)

Your input is welcome (and greatly desired) at the Meta discussion, 2016 Strategy/Community consultation.

Apologies for English, where this is posted on a non-English project. We thought it was more important to get the consultation translated as much as possible, and good headway has been made there in some languages. There is still much to do, however! We created m:2016 Strategy/Translations to try to help coordinate what needs translation and what progress is being made. :)

If you have questions, please reach out to me on my talk page or on the strategy consultation's talk page or by email to mdennis@wikimedia.org.

I hope you'll join us! Maggie Dennis via MediaWiki message delivery (talk) 19:06, 18 January 2016 (UTC)

i would encourage everyone to vote early and often. there is a "Align efforts between our affiliate organizations and the Wikimedia Foundation to increase local language and community coverage on key initiatives." Slowking4RAN's revenge 00:20, 19 January 2016 (UTC)

CC0 texts about artworks at the Rijksmuseum

Hello! The API of the Rijksmuseum Amsterdam is CC0. It includes explanatory texts, often in multiple languages, that are shown alongside the artworks in the museum. (Example for their most famous painting.) A Rijksmuseum staff member kindly offered me and a few colleague volunteers to send us an export of all these texts so that we can process them more easily and integrate them in Wikimedia projects. In any case, we want to reference them on Wikidata (i.e. link them to the paintings' Wikidata items), but Wikidata is not suited for longer texts. I was wondering what would be the best place in the Wikimedia ecosystem for these texts, and I thought about Wikisource - 1 page per artwork text, with the translations in various other language Wikisource wikis. Does anyone object to this, or have any thoughts or tips? Thanks! Spinster (talk) 07:36, 20 January 2016 (UTC)

how would you like to format this work? would it be "Rijksmuseum Collection explanatory texts", with a "chapter" on each work? how would you like to index it here? is there a "base language" with translations? Slowking4RAN's revenge 02:03, 21 January 2016 (UTC)
@Spinster: What about commons? How long are the texts, on average, and in the extreme? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 16:57, 22 January 2016 (UTC)

no header tag

apparently the no header tag is flagging pages with headers Slowking4RAN's revenge 04:44, 22 January 2016 (UTC)
oh i see a missing close bracket gives this error. very unhelpful. if someone could incorporate the red broken field that would be helpful. Slowking4RAN's revenge 04:46, 22 January 2016 (UTC)
The filter looks for a whole template. I am not certain what else you are expecting it to do when something is awry. Always willing to look to improve our error handling, however I am not sure of your meaning of the "red broken field". The abuse filter response is at Mediawiki:Headerless-edit-notice and you can make suggestions to improve the message on its talk page. — billinghurst sDrewth 04:25, 24 January 2016 (UTC)
well over on commons, and wikipedia, they now have red warnings when you have broken fields in templates. the "no template" warning is not intuitive, when it’s a broken template, but now i know so it's all good. & i love the circularity there: "leave a message at scriptorium". Slowking4RAN's revenge 21:18, 24 January 2016 (UTC)

Importing another wiki

I'm working with a client who have an academic, technical wiki which they are intending to sunset at the end of their project. (Details are currently confidential, sorry.) The content is of the standard one would expect in a book, written by experts in their field. The wiki is running "vanilla" MediaWiki, with no templates, and has a number of open-licensed images. There is extensive use of <math> markup.

They are willing to make it available under an open licence. In principle, is it possible for us to upload this material to Wikisource? Is there an automatic or semi-automatic tool that could be used to do the grunt work. Some staff time would be available to do any checking and polishing needed. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 16:56, 22 January 2016 (UTC)

is there a reason not to upload it on internet archive, transfer to commons, and upload here? (i.e. the normal process) harder than a cut and paste, but more places = easier to find. btw, here is a born digital document as an example Oral Literature in the Digital Age: Archiving Orality and Connecting with Communities. Slowking4RAN's revenge 20:05, 22 January 2016 (UTC)
Thank you. No; but I wasn't aware that that was the "normal process". Can you point me at a tutorial or similar for it, please? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 22:25, 22 January 2016 (UTC)
Help:Adding texts & Help:Internet Archive. a little tl;dr, and as usual the tools are easter eggs. Slowking4RAN's revenge 04:04, 24 January 2016 (UTC)
Thank you. I'm not clear what the advantage in that method is - the content is already available in Wiki markup. If it's on the IA, it will be as archived web pages, not a book. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:50, 25 January 2016 (UTC)
internet archive is a stable repository, and does format conversion. they have books, webpages, video, videogames, etc. no drama there. you can save it as you see fit, and then incorporate it at wiki-where-ever. and hopefully use as a reference, getting some eyeballs. Slowking4RAN's revenge 03:42, 26 January 2016 (UTC)
Is there a dump? ShakespeareFan00 (talk) 19:21, 23 January 2016 (UTC)
No, but there could be once it's been finished. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:49, 25 January 2016 (UTC)

Taking a step back, are we talking about a published work? A peer-reviewed work? ... things that would bring it into scope ... WS:WWI. Trying to ensure that it does belong here rather than at Wikibooks. That said, if it is in a MW wiki they should be able to export it as XML via their Special:Export, and we can have it imported here through Special:Import. More links at mw:Import/Export. — billinghurst sDrewth 13:02, 23 January 2016 (UTC)
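To illustrate the export step described above (a sketch only; the host name and page titles are placeholders), the source wiki's Special:Export can be driven programmatically, and the resulting XML is what Special:Import on the receiving wiki consumes:

// Pull an XML dump of selected pages from the source wiki's Special:Export.
const pages = [ 'Main Page', 'Chapter 1', 'Chapter 2' ];   // placeholder titles

fetch( 'https://example-wiki.org/wiki/Special:Export', {
	method: 'POST',
	headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
	body: new URLSearchParams( {
		pages: pages.join( '\n' ),   // one title per line
		curonly: '1',                // current revisions only
		templates: '1'               // also include transcluded templates, if any
	} )
} )
	.then( res => res.text() )
	.then( xml => console.log( xml.slice( 0, 200 ) ) );   // XML ready for Special:Import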

It's not peer reviewed; it's more in the nature of a textbook. It's not published, though it could have been. The website may be made public for a while, to facilitate third-party archiving, before it is deleted. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:58, 23 January 2016 (UTC)
@Pigsonthewing: It is not within what we would call our "normal" inclusions, and would somewhat be an exemption. I feel that it is more aligned to Wikibooks, "the open-content textbooks collection that anyone can edit": b:Wikibooks:What is Wikibooks. I would suggest asking at b:Wikibooks:Reading room first. — billinghurst sDrewth 04:16, 24 January 2016 (UTC)
Done: en:wikibooks:Wikibooks:Reading room/General#Importing another wiki. That said, I do see it falling more on this side of the fence. It will be a complete, finished work by the time they're ready to donate it. Presumably, if it's in Wikibooks, it can't be cited as a reference in Wikipedia? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:38, 25 January 2016 (UTC)

Creator namespace creation tool for Commons

There is a Magnus tool available at Tool Labs that enables the quick creation of Creator: ns pages at Commons, which is particularly useful in conjunction with the Book template upload tool. The tool uses Wikidata, so it is always worthwhile checking that the data there is up to date prior to creating the template. Of course, the tool can be used at any time to refresh a Creator: page.

billinghurst sDrewth 11:22, 24 January 2016 (UTC)

nice, works well. Slowking4RAN's revenge 21:44, 24 January 2016 (UTC)


“Free the Law” initiative

scanning announcement = impending work to do? http://news.harvard.edu/gazette/story/2015/10/free-the-law-will-provide-open-access-to-all/ -- Slowking4RAN's revenge 19:26, 25 January 2016 (UTC)

Caption to the top, to the left.

Using {{FIS}}, how do I place the caption at the top of an image, and to the left? Also, what is the notation for a caption with a hanging indent? Working on Meat for Thrifty Meals, pages 15 and 13. Cheers, Zoeannl (talk) 02:14, 26 January 2016 (UTC) Sorry, meant to ask in help.

@Zoeannl: Whilst not ruled out, {{FIS}} is probably not the best choice of image template here; something like {{overfloat image}} might be more appropriate. If you really have your heart set on using the former, then I suggest modifying the image margin/padding to reserve a blank area beside the actual image, then using position:relative; etc. style directives upon <span>s within the caption to locate text fragments over that blank space. Page 13 appears to have text duplicating that which appears within the image itself. Did you intend the duplication to be reformed into an image alt= declaration instead? AuFCL (talk) 06:57, 26 January 2016 (UTC)

Future IdeaLab Campaigns results

Last December, I invited you to help determine future ideaLab campaigns by submitting and voting on different possible topics. I'm happy to announce the results of your participation, and encourage you to review them and our next steps for implementing those campaigns this year. Thank you to everyone who volunteered time to participate and submit ideas.

With great thanks,

I JethroBT (WMF), Community Resources, Wikimedia Foundation. 23:49, 26 January 2016 (UTC)

Vote of confidence

Hi all,

The annual administrator confirmation of Geo Swan has met the criteria for a community vote of confidence. Continued access to administrator tools will be decided by a simple majority of votes of established community members. You are invited to express your view / cast your vote at Wikisource:Administrators#Geo Swan.

Hesperian 03:43, 27 January 2016 (UTC)